Re: [Users] Testday aftermath

2013-02-01 Thread Kanagaraj

Hi Joop,

 Looks like the problem is because of the glusterfs version you are 
using. vdsm could not parse the output from gluster.


 Can you update the glusterfs to 
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and 
check it out?
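
For context, the failing step is vdsm parsing the XML that the gluster CLI
returns. As a rough illustration only (this is not vdsm's actual code), the
kind of parsing involved looks like the Python sketch below, fed with the
volCreate reply that shows up in the engine log further down:

import xml.etree.ElementTree as ET

# sample taken from the log below (reformatted); bytes so the encoding
# declaration is handled uniformly
sample = b'''<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count>
<bricks>st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data</bricks>
<transport>tcp</transport><type>2</type><volname>GlusterData</volname>
<replica-count>2</replica-count></volCreate></cliOutput>'''

vol = ET.fromstring(sample).find('volCreate')
print(vol.find('volname').text)           # GlusterData
print(vol.find('replica-count').text)     # 2
print(vol.find('bricks').text.split())    # the two brick paths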


Thanks,
Kanagaraj

On 02/01/2013 03:23 PM, Joop wrote:

Yesterday was testday but not much fun :-(

I had a reasonably working setup, but for testday I decided to start from
scratch and that ended rather soon. Installing and configuring the engine was
not a problem, but I want a setup where I have two gluster hosts and two
hosts as VM hosts.
I added a second cluster using the web interface, set it to gluster storage,
and added two minimally installed Fedora 18 hosts on which I set up static
networking and verified that it worked.
Adding the two hosts went OK, but adding a volume gives the following error
on the engine:

2013-02-01 09:32:39,084 INFO
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] Running command:
CreateGlusterVolumeCommand internal: false. Entities affected :  ID:
8720debc-a184-4b61-9fa8-0fdf4d339b9a Type: VdsGroups
2013-02-01 09:32:39,117 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] START,
CreateGlusterVolumeVDSCommand(HostName = st02, HostId =
e7b74172-2f95-43cb-83ff-11705ae24265), log id: 4270f4ef
2013-02-01 09:32:39,246 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--127.0.0.1-8702-4) [5ea886d] Weird return value: StatusForXmlRpc
[mCode=4106, mMessage=XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>st01.nieuwland.nl:/home/gluster-data
st02.nieuwland.nl:/home/gluster-data</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
]
2013-02-01 09:32:39,248 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--127.0.0.1-8702-4) [5ea886d] Weird return value: StatusForXmlRpc
[mCode=4106, mMessage=XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>st01.nieuwland.nl:/home/gluster-data
st02.nieuwland.nl:/home/gluster-data</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
]
2013-02-01 09:32:39,249 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--127.0.0.1-8702-4) [5ea886d] Failed in CreateGlusterVolumeVDS method
2013-02-01 09:32:39,250 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--127.0.0.1-8702-4) [5ea886d] Error code unexpected and error message
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS,
error = XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>st01.nieuwland.nl:/home/gluster-data
st02.nieuwland.nl:/home/gluster-data</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>

2013-02-01 09:32:39,254 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--127.0.0.1-8702-4)
[5ea886d] Command CreateGlusterVolumeVDS execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
CreateGlusterVolumeVDS, error = XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>st01.nieuwland.nl:/home/gluster-data
st02.nieuwland.nl:/home/gluster-data</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>

2013-02-01 09:32:39,255 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] FINISH, CreateGlusterVolumeVDSCommand,
log id: 4270f4ef
2013-02-01 09:32:39,256 ERROR
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] Command
org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand throw Vdc Bll
exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS,
error = XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>st01.nieuwland.nl:/home/gluster-data
st02.nieuwland.nl:/home/gluster-data</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>

2013-02-01 09:32:39,268 INFO
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] Lock freed to object EngineLock
[exclusiveLocks= key: 8720debc-a184-4b61-9fa8-0fdf4d339b9a value: GLUSTER
, sharedLocks= ]
2013-02-01 09:32:40,902 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(QuartzScheduler_Worker-85) START, GlusterVolumesListVDSCommand(HostName =
st02, HostId = e7b74172-2f95-43cb-83ff-11705ae24265), log id: 61cafb32

And on ST01 the 

Re: [Users] Problem with libvirt

2013-02-01 Thread Antoni Segura Puimedon
Hola,

Could you make some pastebins with the contents of the files
/etc/sysconfig/network-scripts/ifcfg*?

Also, virsh -r net-list and the log generated during the process
of losing the connection when creating a guest.

Best,

Toni

- Original Message -
 From: Juan Jose jj197...@gmail.com
 To: Moti Asayag masa...@redhat.com
 Cc: users@ovirt.org
 Sent: Friday, February 1, 2013 10:57:08 AM
 Subject: Re: [Users] Problem with libvirt
 
 
 Hello Monti and Dafna,
 
 
 The host has connectivity with the engine until I try to install a
 VM; in the middle of the process the host loses connectivity. I
 can see that it is a connection problem. How can I check if the host
 address is the same as the one I used to add it to my data-center? I made
 an IP address change on the host, but I deleted it from the engine and
 re-added it after changing its address.
 
 
 On the other hand, I would like to know if my network configuration
 is correct, because when I execute the ifconfig command on the host
 console I can see the interfaces bond0 to bond4, the em1 interface,
 localhost and ovirtmgmt with the host IP. Is that correct?
 
 
 Many thanks in advance,
 
 
 Juanjo.
 
 
 On Thu, Jan 31, 2013 at 3:04 PM, Moti Asayag  masa...@redhat.com 
 wrote:
 
 
 
 On 01/31/2013 03:37 PM, Juan Jose wrote:
  Hello Moti,
  
  The execution of this command in the host is:
 
 This indicates VDSM is up and running correctly, but the ovirt-engine
 can't reach it.
 
 Can you check the connectivity from the ovirt-engine to the host (use
 the same address as used to add it to the data-center)?
 
 Maybe there are iptables issues preventing the engine from establishing
 a connection to the host.
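
A quick way to check that from the engine side is to see whether vdsm's
management port on the host is reachable at all. A minimal sketch (54321 is
vdsm's default port; the host name below is a placeholder):

import socket

HOST = 'ovirt-host.example.com'   # placeholder: use the address the host was added with
PORT = 54321                      # vdsm's default management (XML-RPC) port

try:
    socket.create_connection((HOST, PORT), timeout=5).close()
    print('vdsm port reachable from the engine')
except socket.error as exc:
    print('cannot reach vdsm port: %s' % exc)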
 
 
 
 
  
  [root@ovirt-host ~]# vdsClient -s 0 getVdsCaps
  HBAInventory = {'iSCSI': [{'InitiatorName':
  'iqn.1994-05.com.redhat:69e9aaf7e4c'}], 'FC': []}
  ISCSIInitiatorName = iqn.1994-05.com.redhat:69e9aaf7e4c
  bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500',
  'netmask':
  '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr':
  '',
  'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
  '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {}, 'mtu':
  '1500',
  'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'},
  'bond2':
  {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [],
  'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {},
  'mtu':
  '1500', 'netmask': '', 'slaves': [], 'hwaddr':
  '00:00:00:00:00:00'}}
  clusterLevels = ['3.0', '3.1']
  cpuCores = 4
  cpuFlags =
  fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,lahf_lm,dtherm,tpr_shadow,vnmi,flexpriority,model_coreduo,model_Conroe
  cpuModel = Intel(R) Core(TM)2 Quad CPU Q9300 @ 2.50GHz
  cpuSockets = 1
  cpuSpeed = 1999.000
  emulatedMachines = ['pc-0.15', 'pc-1.0', 'pc', 'pc-0.14',
  'pc-0.13',
  'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc', 'pc-0.15', 'pc-1.0',
  'pc',
  'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc']
  guestOverhead = 65
  hooks = {}
  kvmEnabled = true
  lastClient = xxx.xxx.xxx.91
  lastClientIface = ovirtmgmt
  management_ip =
  memSize = 7701
  netConfigDirty = False
  networks = {'ovirtmgmt': {'addr': '158.109.202.67', 'cfg':
  {'DELAY':
  '0', 'IPV6INIT': 'no', 'UUID':
  '3cbac056-822a-43e9-a4ec-df5324becd79',
  'DEFROUTE': 'yes', 'DNS1': 'xxx.xxx.xxx.1', 'IPADDR':
  'xxx.xxx.xxx.67',
  'ONBOOT': 'yes', 'IPV4_FAILURE_FATAL': 'no', 'BROADCAST':
  'xxx.xxx.xxx.255', 'NM_CONTROLLED': 'no', 'NETMASK':
  'xxx.xxx.xxx.0',
  'BOOTPROTO': 'none', 'DNS2': 'xxx.xxx.xxx.9', 'DEVICE':
  'ovirtmgmt',
  'TYPE': 'Bridge', 'GATEWAY': 'xxx.xxx.xxx.1', 'NETWORK':
  'xxx.xxx.xxx.0'}, 'mtu': '1500', 'netmask': '255.255.254.0', 'stp':
  'off', 'bridged': True, 'gateway': 'xxx.xxx.xxx.1', 'ports':
  ['em1']}}
  nics = {'em1': {'hwaddr': '00:19:99:35:cc:54', 'netmask': '',
  'speed':
  1000, 'addr': '', 'mtu': '1500'}}
  operatingSystem = {'release': '1', 'version': '17', 'name':
  'Fedora'}
  packages2 = {'kernel': {'release': '1.fc17.x86_64', 'buildtime':
  1350912755.0, 'version': '3.6.3'}, 'spice-server': {'release':
  '5.fc17',
  'buildtime': '1336983054', 'version': '0.10.1'}, 'vdsm':
  {'release':
  '10.fc17', 'buildtime': '1349383616', 'version': '4.10.0'},
  'qemu-kvm':
  {'release': '2.fc17', 'buildtime': '1349642820', 'version':
  '1.0.1'},
  'libvirt': {'release': '2.fc17', 'buildtime': '1355687905',
  'version':
  '0.9.11.8'}, 'qemu-img': {'release': '2.fc17', 'buildtime':
  '1349642820', 'version': '1.0.1'}}
  reservedMem = 321
  software_revision = 10
  software_version = 4.10
  supportedProtocols = ['2.2', '2.3']
  supportedRHEVMs = ['3.0', '3.1']
  uuid = 36303030-3139-3236-3800-00199935CC54_00:19:99:35:cc:54
  version_name = Snow Man
  vlans = {}
  vmTypes = ['kvm']
  [root@ovirt-host 

Re: [Users] Testday aftermath

2013-02-01 Thread noc

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

 Looks like the problem is because of the glusterfs version you are 
using. vdsm could not parse the output from gluster.


 Can you update the glusterfs to 
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and 
check it out?

How??

I tried adding this repo, but yum says that there are no updates
available; at least, yesterday it did.


[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum-foo isn't that good, so I don't know how to force it. Besides, I
tried yum localinstall, but it will revert when yum update is run. It
looks like yum thinks that 3.3.1 is newer than 3.4.


Joop


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testday aftermath

2013-02-01 Thread Shireesh Anjal

On 02/01/2013 05:13 PM, noc wrote:

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

 Looks like the problem is because of the glusterfs version you are 
using. vdsm could not parse the output from gluster.


 Can you update the glusterfs to 
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and 
check it out?

How??

I tried adding this repo, but yum says that there are no updates
available; at least, yesterday it did.


[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum-foo isn't that good, so I don't know how to force it. Besides, I
tried yum localinstall, but it will revert when yum update is run. It
looks like yum thinks that 3.3.1 is newer than 3.4.


The problem is that the released glusterfs RPMs in the Fedora repository
are of the form 3.3.1-8, whereas the ones from the above QA release are
v3.4.0qa7. I think that because of the 'v' before 3.4, these are considered
a lower version, and by default yum picks up the RPMs from the Fedora
repository.


To work around this issue, you could try:

yum --disablerepo=* --enablerepo=gluster-nieuw install glusterfs 
glusterfs-fuse glusterfs-geo-replication glusterfs-server
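
If you want to double-check the ordering yum applies here, the rpm Python
bindings can compare the two (epoch, version, release) tuples directly. A
small sketch (assumes the rpm Python bindings, e.g. the rpm-python package,
are installed):

import rpm

fedora = ('0', '3.3.1', '8.fc18')      # released Fedora build
qa     = ('0', 'v3.4.0qa7', '1.el6')   # QA build from bits.gluster.org

# labelCompare returns 1 if the first argument is newer, -1 if older, 0 if equal
print(rpm.labelCompare(fedora, qa))    # prints 1: 3.3.1-8 sorts newer than v3.4.0qa7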




Joop




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [Users] Testday aftermath

2013-02-01 Thread Joop

Shireesh Anjal wrote:

On 02/01/2013 05:13 PM, noc wrote:

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

 Looks like the problem is because of the glusterfs version you are 
using. vdsm could not parse the output from gluster.


 Can you update the glusterfs to 
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and 
check it out?

How??

I tried adding this repo, but yum says that there are no updates
available; at least, yesterday it did.


[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum-foo isn't that good, so I don't know how to force it. Besides, I
tried yum localinstall, but it will revert when yum update is run. It
looks like yum thinks that 3.3.1 is newer than 3.4.


The problem is that the released glusterfs RPMs in the Fedora repository
are of the form 3.3.1-8, whereas the ones from the above QA release are
v3.4.0qa7. I think that because of the 'v' before 3.4, these are considered
a lower version, and by default yum picks up the RPMs from the Fedora
repository.


To work around this issue, you could try:

yum --disablerepo=* --enablerepo=gluster-nieuw install glusterfs 
glusterfs-fuse glusterfs-geo-replication glusterfs-server


[root@st01 ~]# yum --disablerepo=* --enablerepo=gluster-nieuw 
install glusterfs glusterfs-fuse glusterfs-geo-replication glusterfs-server


Loaded plugins: langpacks, presto, refresh-packagekit
Package matching glusterfs-v3.4.0qa7-1.el6.x86_64 already installed. 
Checking for update.
Package matching glusterfs-fuse-v3.4.0qa7-1.el6.x86_64 already 
installed. Checking for update.
Package matching glusterfs-server-v3.4.0qa7-1.el6.x86_64 already 
installed. Checking for update.

Resolving Dependencies
--> Running transaction check
---> Package glusterfs-geo-replication.x86_64 0:v3.4.0qa7-1.el6 will be
installed
--> Processing Dependency: glusterfs = v3.4.0qa7-1.el6 for package:
glusterfs-geo-replication-v3.4.0qa7-1.el6.x86_64

--> Finished Dependency Resolution
Error: Package: glusterfs-geo-replication-v3.4.0qa7-1.el6.x86_64 
(gluster-nieuw)

  Requires: glusterfs = v3.4.0qa7-1.el6
  Installed: glusterfs-3.3.1-8.fc18.x86_64 (@updates)
  glusterfs = 3.3.1-8.fc18
  Available: glusterfs-v3.4.0qa7-1.el6.x86_64 (gluster-nieuw)
  glusterfs = v3.4.0qa7-1.el6
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest


[root@st01 ~]# yum --disablerepo=* --enablerepo=gluster-nieuw 
install glusterfs glusterfs-fuse glusterfs-geo-replication 
glusterfs-server --skip-broken


Loaded plugins: langpacks, presto, refresh-packagekit
Package matching glusterfs-v3.4.0qa7-1.el6.x86_64 already installed. 
Checking for update.
Package matching glusterfs-fuse-v3.4.0qa7-1.el6.x86_64 already 
installed. Checking for update.
Package matching glusterfs-server-v3.4.0qa7-1.el6.x86_64 already 
installed. Checking for update.

Resolving Dependencies
--> Running transaction check
---> Package glusterfs-geo-replication.x86_64 0:v3.4.0qa7-1.el6 will be
installed
--> Processing Dependency: glusterfs = v3.4.0qa7-1.el6 for package:
glusterfs-geo-replication-v3.4.0qa7-1.el6.x86_64
gluster-nieuw/filelists 
| 7.2 kB  00:00:00


Packages skipped because of dependency problems:
   glusterfs-geo-replication-v3.4.0qa7-1.el6.x86_64 from gluster-nieuw

Last post, probably, until Sunday evening/Monday morning; off to FOSDEM ;-)

Joop


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testday aftermath

2013-02-01 Thread Joop

Shireesh Anjal wrote:

On 02/01/2013 05:13 PM, noc wrote:

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

 Looks like the problem is because of the glusterfs version you are 
using. vdsm could not parse the output from gluster.


 Can you update the glusterfs to 
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and 
check it out?

How??

I tried adding this repo, but yum says that there are no updates
available; at least, yesterday it did.


[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum-foo isn't that good, so I don't know how to force it. Besides, I
tried yum localinstall, but it will revert when yum update is run. It
looks like yum thinks that 3.3.1 is newer than 3.4.


The problem is that the released glusterfs RPMs in the Fedora repository
are of the form 3.3.1-8, whereas the ones from the above QA release are
v3.4.0qa7. I think that because of the 'v' before 3.4, these are considered
a lower version, and by default yum picks up the RPMs from the Fedora
repository.


The 'v' is 99.9% the culprit. I had 3.4.0qa6 before I wiped, and I just had
a look: that folder and repo don't have the 'v' in front of it.


Is there someone on this list that has the 'powers' to change that ??

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testday aftermath

2013-02-01 Thread Kanagaraj

On 02/01/2013 06:47 PM, Joop wrote:

Shireesh Anjal wrote:

On 02/01/2013 05:13 PM, noc wrote:

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

 Looks like the problem is because of the glusterfs version you are 
using. vdsm could not parse the output from gluster.


 Can you update the glusterfs to 
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and 
check it out?

How??

I tried adding this repo, but yum says that there are no updates
available; at least, yesterday it did.


[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum-foo isn't that good, so I don't know how to force it. Besides, I
tried yum localinstall, but it will revert when yum update is run. It
looks like yum thinks that 3.3.1 is newer than 3.4.


The problem is that the released glusterfs RPMs in the Fedora repository
are of the form 3.3.1-8, whereas the ones from the above QA release are
v3.4.0qa7. I think that because of the 'v' before 3.4, these are considered
a lower version, and by default yum picks up the RPMs from the Fedora
repository.


The 'v' is 99.9% the culprit. I had 3.4.0qa6 before I wiped, and I just
had a look: that folder and repo don't have the 'v' in front of it.



That's correct.

[kanagaraj@localhost ~]$ rpmdev-vercmp glusterfs-3.3.1-8.fc18.x86_64 
glusterfs-v3.4.0qa7-1.el6.x86_64

glusterfs-3.3.1-8.fc18.x86_64 > glusterfs-v3.4.0qa7-1.el6.x86_64


Is there someone on this list that has the 'powers' to change that ??



[Adding Vijay]


Joop



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




[Users] Storage FC on ovirt 3.2 : /dev/dm* bad chown

2013-02-01 Thread Kevin Maziere Aubry
Hi

My environment is Fedora 18 / oVirt 3.2 with VDSM vdsm-4.10.3-6.fc18.x86_64.

I use storage on Fibre Channel. When creating a VM, the disk is created by
oVirt.
When starting the VM, the disk does not have the permissions needed for vdsm
to access it. In fact the device for the disk has root:disks as owner/group.
Changing that to 36.36 permits the VM to start and work fine.
But snapshots do not work, and a reboot causes udev to remap the disk to
another /dev/dm-* device.
My Fedora 18 host has SELinux disabled.
On IRC:
<apuimedo> vdsm/storage/blockSD.py:cmd = [constants.EXT_CHOWN,
"%s:%s" %
<apuimedo> vdsm/storage/blockSD.py:self.log.error("failed to
chown %s", masterDir)
<apuimedo> vdsm/storage/fileUtils.py:def chown(path, user=-1, group=-1):
<apuimedo> vdsm/storage/fileUtils.py:os.chown(path, uid, gid)
<apuimedo> vdsm/storage/lvm.py:cmd = [constants.EXT_CHOWN,
USER_GROUP, lv_path]
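
For reference, a minimal sketch of what the 36.36 ownership change (and
vdsm's fileUtils.chown, grepped above) boils down to; the uid/gid values are
an assumption based on a stock oVirt host, where 36:36 maps to vdsm:kvm:

import os

VDSM_UID = 36   # vdsm user on a stock oVirt host (assumption)
KVM_GID = 36    # kvm group on a stock oVirt host (assumption)

def hand_device_to_vdsm(path):
    """chown a block device node (e.g. /dev/dm-7) so vdsm/qemu can open it."""
    os.chown(path, VDSM_UID, KVM_GID)

# Note: this does not survive a reboot -- udev recreates the node owned by
# root, which is exactly the problem described above.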



Kevin Mazière
Responsable Infrastructure
Alter Way – Hosting
 1 rue Royal - 227 Bureaux de la Colline
92213 Saint-Cloud Cedex
Tél : +33 (0)1 41 16 38 41
Mob : +33 (0)7 62 55 57 05
 http://www.alterway.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Storage FC on ovirt 3.2 : /dev/dm* bad chown

2013-02-01 Thread Kevin Maziere Aubry
Hi

It looks like this bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=903716

Kevin


2013/2/1 Kevin Maziere Aubry kevin.mazi...@alterway.fr

 Hi

 My environment is Fedora 18 / oVirt 3.2 with
 VDSM vdsm-4.10.3-6.fc18.x86_64.

 I use storage on Fibre Channel. When creating a VM, the disk is created
 by oVirt.
 When starting the VM, the disk does not have the permissions needed for
 vdsm to access it.
 In fact the device for the disk has root:disks as owner/group.
 Changing that to 36.36 permits the VM to start and work fine.
 But snapshots do not work, and a reboot causes udev to remap the disk to
 another /dev/dm-* device.
 My Fedora 18 host has SELinux disabled.
 On IRC:
 <apuimedo> vdsm/storage/blockSD.py:cmd = [constants.EXT_CHOWN,
 "%s:%s" %
 <apuimedo> vdsm/storage/blockSD.py:self.log.error("failed to
 chown %s", masterDir)
 <apuimedo> vdsm/storage/fileUtils.py:def chown(path, user=-1, group=-1):
 <apuimedo> vdsm/storage/fileUtils.py:os.chown(path, uid, gid)
 <apuimedo> vdsm/storage/lvm.py:cmd = [constants.EXT_CHOWN,
 USER_GROUP, lv_path]



 Kevin Mazière
 Responsable Infrastructure
 Alter Way – Hosting
  1 rue Royal - 227 Bureaux de la Colline
 92213 Saint-Cloud Cedex
 Tél : +33 (0)1 41 16 38 41
 Mob : +33 (0)7 62 55 57 05
  http://www.alterway.fr




-- 

Kevin Mazière
Responsable Infrastructure
Alter Way – Hosting
 1 rue Royal - 227 Bureaux de la Colline
92213 Saint-Cloud Cedex
Tél : +33 (0)1 41 16 38 41
Mob : +33 (0)7 62 55 57 05
 http://www.alterway.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testday aftermath

2013-02-01 Thread Vijay Bellur

On 02/01/2013 07:38 PM, Kanagaraj wrote:

On 02/01/2013 06:47 PM, Joop wrote:

Shireesh Anjal wrote:

On 02/01/2013 05:13 PM, noc wrote:

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

 Looks like the problem is because of the glusterfs version you are
using. vdsm could not parse the output from gluster.

 Can you update the glusterfs to
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and
check it out?

How??

I tried adding this repo, but yum says that there are no updates
available; at least, yesterday it did.

[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum-foo isn't that good, so I don't know how to force it. Besides, I
tried yum localinstall, but it will revert when yum update is run. It
looks like yum thinks that 3.3.1 is newer than 3.4.


The problem is that the released glusterfs RPMs in the Fedora repository
are of the form 3.3.1-8, whereas the ones from the above QA release are
v3.4.0qa7. I think that because of the 'v' before 3.4, these are considered
a lower version, and by default yum picks up the RPMs from the Fedora
repository.


The 'v' is 99.9% the culprit. I had 3.4.0qa6 before I wiped, and I just
had a look: that folder and repo don't have the 'v' in front of it.


That's correct.

[kanagaraj@localhost ~]$ rpmdev-vercmp glusterfs-3.3.1-8.fc18.x86_64
glusterfs-v3.4.0qa7-1.el6.x86_64
glusterfs-3.3.1-8.fc18.x86_64 > glusterfs-v3.4.0qa7-1.el6.x86_64


Is there someone on this list that has the 'powers' to change that ??



[Adding Vijay]



3.4.0qa8 is available now. Can you please check with that?

Thanks,
Vijay


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] VM migrations failing

2013-02-01 Thread Dead Horse
Both nodes are identical and can fully communicate with each other.
Since normal non-p2p live migration works, both hosts can reach each
other via the connection URI.
Perhaps I am missing something here?
- DHC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] VM migrations failing

2013-02-01 Thread Martin Kletzander
On 02/01/2013 09:29 PM, Dead Horse wrote:
 To test further I loaded up two more identical servers with EL 6.3 and the
 same package versions originally indicated. The difference here is that I
 did not turn these into oVirt nodes, e.g. by installing VDSM.
 
 - All configurations were left at defaults on both servers
 - iptables and selinux disabled on both servers
 - verified full connectivity between both servers
 - set up ssh (/root/.ssh/authorized_keys) between the servers -- this turned out
 to be the key!
 
 Then using syntax found here:
 http://libvirt.org/migration.html#flowpeer2peer
 e.g. from the source server I issued the following:


So your client is the same machine as the source server; that makes us sure
the connection is made on the same network for both p2p and non-p2p migration.

 virsh migrate --p2p sl63 qemu+ssh://192.168.1.2/system
 

You're using ssh transport here, but isn't vdsm using tcp or tls?
According to the config file tcp transport is enabled with no
authentication whatsoever...

 It fails in exactly the same way as previously indicated when the
 destination server does not have an ssh RSA public key from the source
 system in its /root/.ssh/authorized_keys file.
 However, once the ssh RSA public key is in place on the destination system,
 all is well and migrations work as expected.
 

..., which would mean you need no ssh keys when migrating using tcp
transport instead.

Also, during p2p migration the source libvirt daemon can't ask you for
the password, but when not using p2p the client connects to the
destination itself, and is thus able to ask for the password and/or use
different ssh keys.
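
To make that distinction concrete, here is a minimal sketch using the libvirt
Python bindings (domain name and destination address are taken from the
example above; the exact flag set is an assumption):

import libvirt

# the client and the source host are the same machine in this scenario
src = libvirt.open('qemu:///system')
dom = src.lookupByName('sl63')

# p2p: the *source* libvirtd dials the destination itself, so it cannot
# prompt for an ssh password -- key-based auth (or tcp/tls transport) must
# already be in place
dom.migrateToURI('qemu+ssh://192.168.1.2/system',
                 libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER,
                 None, 0)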

But it looks like none of this has anything to do with the problem as:

 1) as you found out, changing vdsm versions makes the problem go
away/appear and

 2) IIUC the first error was "function is not supported by the
connection driver: virDomainMigrateToURI2", but the second one was
"error: operation failed: Failed to connect to remote libvirt URI".

Since I tried finding out why the first error appeared, I probably
misunderstood somewhere in the middle of this thread and am useless
here. However, if I can help from the libvirt POV, I'll follow up on this
thread and see whether there's anything related.

Good luck,
Martin
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] 3.2 beta and IPA domain question

2013-02-01 Thread Gianluca Cecchi
Hello,
I seem to remember that in RHEV 3.0, when you configured an IPA domain,
its admin was automatically configured as an admin for RHEV itself.
Is that true, and if so, does it remain true for oVirt?

I configured IPA as shipped on CentOS 6.3+updates
ipa-server-2.2.0-17.el6_3.1.x86_64

I successfully added it to my oVirt 3.2 beta setup

[root@f18engine ~]# engine-manage-domains -action=add
-domain=LOCALDOMAIN.LOCAL -user=admin -provider=IPA -interactive
Enter password:

The domain localdomain.local has been added to the engine as an
authentication source but no users from that domain have been granted
permissions within the oVirt Manager.
Users from this domain can be granted permissions from the Web
administration interface.
oVirt Engine restart is required in order for the changes to take
place (service ovirt-engine restart).
Manage Domains completed successfully

Then
[root@f18engine ~]# systemctl try-restart ovirt-engine.service
[root@f18engine ~]# systemctl status ovirt-engine.service
ovirt-engine.service - oVirt Engine
 Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled)
 Active: active (running) since Sat 2013-02-02 00:10:29 CET; 10s ago
Process: 32512 ExecStop=/usr/bin/engine-service stop (code=exited,
status=0/SUCCESS)
Process: 32520 ExecStart=/usr/bin/engine-service start (code=exited,
status=0/SUCCESS)
Main PID: 32521 (java)
 CGroup: name=systemd:/system/ovirt-engine.service
 └─32521 engine-service -server -XX:+TieredCompilation -Xms1g -Xmx1g
-XX:PermSize=256m -XX:MaxPe...

Feb 02 00:10:28 f18engine.localdomain.local systemd[1]: Starting oVirt Engine...
Feb 02 00:10:29 f18engine.localdomain.local engine-service[32520]:
Started engine process 32521.
Feb 02 00:10:29 f18engine.localdomain.local engine-service[32520]:
Starting engine-service: [  OK  ]
Feb 02 00:10:29 f18engine.localdomain.local systemd[1]: Started oVirt Engine.


Now from the web admin portal I can choose the localdomain.local domain
in the drop-down menu.
But when I try to enter the webadmin portal I get:

User is not authorized to perform this action.


Do I need to grant permissions to the IPA admin user from the internal
admin first, or should it just work?

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users