[ovirt-users] Re: LVM Mirroring

2018-09-16 Thread Maor Lipchuk
On Thu, Sep 13, 2018 at 9:59 PM, René Koch  wrote:

> Hi list,
>
> Is it possible to use LVM mirroring for FC LUNs in oVirt 4.2?

> I've 2 FC storages which do not have replication functionality. So the
> idea is to create an LVM mirror using 1 LUN of storage 1 and 1 LUN of
> storage 2 for replicating data between the storages on the host level (full
> RHEL 7.5, not oVirt Node). With e.g. pacemaker and plain KVM this works
> fine. Is it possible to have the same setup with oVirt 4.2?
>

(Adding Nir)
Are you asking this in relation to disaster recovery support for an oVirt
setup?
I'm not sure whether it will help with what you are looking for, but have
you tried looking into Gluster geo-replication:
  https://docs.gluster.org/en/v3/Administrator%20Guide/Geo%20Replication/
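
For background on the original question: on plain RHEL such a mirror is
usually built as an LVM RAID1 logical volume spanning one LUN from each
array. A minimal sketch, assuming the two LUNs show up as the multipath
devices named below (substitute your own WWIDs and sizes):

pvcreate /dev/mapper/lun_storage1 /dev/mapper/lun_storage2
vgcreate vg_mirror /dev/mapper/lun_storage1 /dev/mapper/lun_storage2
# RAID1 LV with one leg on each storage array
lvcreate --type raid1 -m 1 -L 500G -n lv_data vg_mirror \
    /dev/mapper/lun_storage1 /dev/mapper/lun_storage2

Note this is plain host-level LVM, as in the pacemaker/KVM setup described,
not something oVirt manages itself as a storage domain.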



> Thanks a lot.
>
>
> Regards,
> René
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2QXNDD2VVAX2WWQSJN3ZX6SKKM6TJZWK/


[ovirt-users] Re: moving disks around gluster domain is failing

2018-09-16 Thread Maor Lipchuk
On Fri, Sep 14, 2018 at 3:27 PM,  wrote:

> Moving a disk from one gluster domain to another fails, whether the VM is
> running or down.
> It strikes me that it says : File 
> "/usr/lib64/python2.7/site-packages/libvirt.py",
> line 718, in blockCopy
> if ret == -1: raise libvirtError ('virDomainBlockCopy() failed',
> dom=self)
> I'm sending the relevant piece of the log.
>
> But it should be a file copy since it's Gluster, am I right?
> Gluster volumes are LVM thick and have different shard sizes...
>
> 2018-09-14 15:05:53,325+0300 ERROR (jsonrpc/2) [virt.vm]
> (vmId='f90f6533-9d71-4102-9cd6-2d9960a4e585') Unable to start replication
> for sda to {u'domainID': u'd07231ca-89b8-490a
> -819d-8542e1eaee19', 'volumeInfo': {'path': u'vol3/d07231ca-89b8-490a-
> 819d-8542e1eaee19/images/3d95e237-441c-4b41-b823-
> 48d446b3eba6/5716acc8-7ee7-4235-aad6-345e565f3073', 'type
> ': 'network', 'hosts': [{'port': '0', 'transport': 'tcp', 'name':
> '10.252.166.129'}, {'port': '0', 'transport': 'tcp', 'name':
> '10.252.166.130'}, {'port': '0', 'transport': 'tc
> p', 'name': '10.252.166.128'}], 'protocol': 'gluster'}, 'format': 'cow',
> u'poolID': u'90946184-a7bd-11e8-950b-00163e11b631', u'device': 'disk',
> 'protocol': 'gluster', 'propagat
> eErrors': 'off', u'diskType': u'network', 'cache': 'none', u'volumeID':
> u'5716acc8-7ee7-4235-aad6-345e565f3073', u'imageID':
> u'3d95e237-441c-4b41-b823-48d446b3eba6', 'hosts': [
> {'port': '0', 'transport': 'tcp', 'name': '10.252.166.129'}], 'path':
> u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/
> 3d95e237-441c-4b41-b823-48d446b3eba6/5716acc8-7ee7-4235
> -aad6-345e565f3073', 'volumeChain': [{'domainID':
> u'd07231ca-89b8-490a-819d-8542e1eaee19', 'leaseOffset': 0, 'path':
> u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/3d95e237
> -441c-4b41-b823-48d446b3eba6/26214e9d-1126-42a0-85e3-c21f182b582f',
> 'volumeID': u'26214e9d-1126-42a0-85e3-c21f182b582f', 'leasePath':
> u'/rhev/data-center/mnt/glusterSD/10.252.1
> 66.129:_vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/
> 3d95e237-441c-4b41-b823-48d446b3eba6/26214e9d-1126-42a0-85e3-c21f182b582f.lease',
> 'imageID': u'3d95e237-441c-4b41-b823-
> 48d446b3eba6'}, {'domainID': u'd07231ca-89b8-490a-819d-8542e1eaee19',
> 'leaseOffset': 0, 'path': u'vol3/d07231ca-89b8-490a-
> 819d-8542e1eaee19/images/3d95e237-441c-4b41-b823-48d44
> 6b3eba6/2c6c8b45-7d70-4eff-9ea5-f2a377f0ef64', 'volumeID':
> u'2c6c8b45-7d70-4eff-9ea5-f2a377f0ef64', 'leasePath':
> u'/rhev/data-center/mnt/glusterSD/10.252.166.129:_vol3/d07231ca
> -89b8-490a-819d-8542e1eaee19/images/3d95e237-441c-4b41-
> b823-48d446b3eba6/2c6c8b45-7d70-4eff-9ea5-f2a377f0ef64.lease', 'imageID':
> u'3d95e237-441c-4b41-b823-48d446b3eba6'}, {'dom
> ainID': u'd07231ca-89b8-490a-819d-8542e1eaee19', 'leaseOffset': 0,
> 'path': u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/
> 3d95e237-441c-4b41-b823-48d446b3eba6/5716acc8-7ee7
> -4235-aad6-345e565f3073', 'volumeID': u'5716acc8-7ee7-4235-aad6-345e565f3073',
> 'leasePath': u'/rhev/data-center/mnt/glusterSD/10.252.166.129:_
> vol3/d07231ca-89b8-490a-819d-8542e
> 1eaee19/images/3d95e237-441c-4b41-b823-48d446b3eba6/
> 5716acc8-7ee7-4235-aad6-345e565f3073.lease', 'imageID':
> u'3d95e237-441c-4b41-b823-48d446b3eba6'}, {'domainID': u'd07231ca-89
> b8-490a-819d-8542e1eaee19', 'leaseOffset': 0, 'path':
> u'vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/
> 3d95e237-441c-4b41-b823-48d446b3eba6/579e0033-4b94-4675-af78-d017ed2698
> e9', 'volumeID': u'579e0033-4b94-4675-af78-d017ed2698e9', 'leasePath':
> u'/rhev/data-center/mnt/glusterSD/10.252.166.129:_
> vol3/d07231ca-89b8-490a-819d-8542e1eaee19/images/3d95e237-
> 441c-4b41-b823-48d446b3eba6/579e0033-4b94-4675-af78-d017ed2698e9.lease',
> 'imageID': u'3d95e237-441c-4b41-b823-48d446b3eba6'}]} (vm:4710)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4704, in
> diskReplicateStart
> self._startDriveReplication(drive)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4843, in
> _startDriveReplication
> self._dom.blockCopy(drive.name, destxml, flags=flags)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 98, in f
> ret = attr(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
> line 130, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line
> 92, in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 718, in
> blockCopy
> if ret == -1: raise libvirtError ('virDomainBlockCopy() failed',
> dom=self)
> libvirtError: argument unsupported: non-file destination not supported yet
>

Hi,

I think it could be related to https://bugzilla.redhat.com/1306562 (based
on https://bugzilla.redhat.com/1481688#c38)
Denis, what do you think?

Regards,
Maor



> 2018-09-14 15:05:53,328+0300 INFO  (jsonrpc/2) [api.virt] FINISH
> diskReplicateStart 

[ovirt-users] Re: NFS Multipathing

2018-09-16 Thread Maor Lipchuk
On Fri, Sep 14, 2018 at 2:41 PM,  wrote:

> Hi,
> It should be possible, as oVirt is able to support NFS 4.1
> I have a Synology NAS which is also able to support this version of the
> protocol, but I have never found the time to set this up and test it until now.
> Regards
>
>
> On 30-Aug-2018 12:16:32 +0200, xrs...@xrs444.net wrote:
>
> Hello all,
>
> I've been looking around but I've not found anything definitive on
> whether oVirt can do NFS multipathing, and if so how.
>
> Does anyone have any good how-tos or configuration guides?
>
>
I know that you asked about NFS, but in case it helps, oVirt does support
multipathing for iSCSI storage domains:

https://ovirt.org/develop/release-management/features/storage/iscsi-multipath/
Hope it helps

Regards,
Maor


>
> Thanks,
>
> Thomas
>
>
> --
> FreeMail powered by mail.fr
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HGJPFEYTGX444R34TPP6CVANGJ2KZAUH/


[ovirt-users] Re: Device Mapper Timeout when using Gluster Snapshots

2018-07-08 Thread Maor Lipchuk
Thanks for sharing!

On Sun, Jul 8, 2018 at 1:42 PM, Hesham Ahmed  wrote:

> Apparently this is the known default behavior of LVM snapshots, in which
> case maybe Cockpit in oVirt Node should create mount points using the
> /dev/mapper path instead of the UUID by default. The timeout issue
> persists even after switching to /dev/mapper devices in fstab.
> On Sun, Jul 8, 2018 at 12:59 PM Hesham Ahmed  wrote:
> >
> > I also noticed that Gluster Snapshots have the SAME UUID as the main
> > LV and if using UUID in fstab, the snapshot device is sometimes
> > mounted instead of the primary LV
> >
> > For instance:
> > /etc/fstab contains the following line:
> >
> > UUID=a0b85d33-7150-448a-9a70-6391750b90ad /gluster_bricks/gv01_data01
> > auto inode64,noatime,nodiratime,x-parent=dMeNGb-34lY-wFVL-WF42-
> hlpE-TteI-lMhvvt
> > 0 0
> >
> > # lvdisplay gluster00/lv01_data01
> >   --- Logical volume ---
> >   LV Path/dev/gluster00/lv01_data01
> >   LV Namelv01_data01
> >   VG Namegluster00
> >
> > # mount
> > /dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0 on
> > /gluster_bricks/gv01_data01 type xfs
> > (rw,noatime,nodiratime,seclabel,attr2,inode64,sunit=
> 1024,swidth=2048,noquota)
> >
> > Notice above the device mounted at the brick mountpoint is not
> > /dev/gluster00/lv01_data01 and instead is one of the snapshot devices
> > of that LV
> >
> > # blkid
> > /dev/mapper/gluster00-lv01_shaker_com_sa:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-4ca8eef409ec4932828279efb91339de_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-59992b6c14644f13b5531a054d2aa75c_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-362b50c994b04284b1664b2e2eb49d09_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-0b3cc414f4cb4cddb6e81f162cdb7efe_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-da98ce5efda549039cf45a18e4eacbaf_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> > /dev/mapper/gluster00-4ea5cce4be704dd7b29986ae6698a666_0:
> > UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> >
> > Notice that the UUID of the LV and its snapshots is the same, causing systemd
> > to mount one of the snapshot devices instead of the LV, which results in the
> > following gluster error:
> >
> > gluster> volume start gv01_data01 force
> > volume start: gv01_data01: failed: Volume id mismatch for brick
> > vhost03:/gluster_bricks/gv01_data01/gv. Expected volume id
> > be6bc69b-c6ed-4329-b300-3b9044f375e1, volume id
> > 55e97e74-12bf-48db-99bb-389bb708edb8 found
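
For reference, a minimal fstab sketch that mounts the brick by its
device-mapper path instead of the filesystem UUID (LV and mount-point names
are taken from the example above; the option list is trimmed and is an
assumption):

# /etc/fstab
/dev/mapper/gluster00-lv01_data01  /gluster_bricks/gv01_data01  xfs  inode64,noatime,nodiratime  0 0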
> >
> > On Sun, Jul 8, 2018 at 12:32 PM  wrote:
> > >
> > > I have been facing this trouble from version 4.1 up to the latest 4.2.4: once
> > > we enable gluster snapshots and accumulate some snapshots (as few as 15
> > > snapshots per server) we start having trouble booting the server. The
> > > server enters the emergency shell on boot after timing out waiting for
> > > snapshot devices. Waiting a few minutes and pressing Ctrl-D then boots
> > > the server normally. With a very large number of snapshots (600+) it can
> > > take days before the server will boot. I am attaching the journal log;
> > > let me know if you need any other logs.
> > >
> > > Details of the setup:
> > >
> > > 3 node hyperconverged oVirt setup (64GB RAM, 8-Core E5 Xeon)
> > > oVirt 4.2.4
> > > oVirt Node 4.2.4
> > > 10Gb Interface
> > >
> > > Thanks,
> > >
> > > Hesham S. Ahmed
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/STAK2QO6Q3YDE4LAP7UBPNMEURV24WMP/


[ovirt-users] Re: Advice on deploying oVirt hyperconverged environment Node based

2018-07-08 Thread Maor Lipchuk
cc'ing Denis and Sahina,
Perhaps they can share their experience and insights with the hyperconverged
environment.

Regards,
Maor

On Fri, Jul 6, 2018 at 9:47 AM, Tal Bar-Or  wrote:

> Hello All,
> I am about to deploy a new oVirt system for our developers, which we plan to
> be a hyperconverged, Node-based environment.
>
> The system workload would mostly be builders compiling our code, which
> involves lots of small files and intensive IO.
> I plan to build two Gluster volume "layers": one based on SAS drives for the
> OS to spin on, and a second, NVMe-based one for intensive IO.
> I would expect the system to be resilient/highly available and at the same
> time give good enough IO for the VM builders, which will be at least 6 to 8
> VM guests.
> The system hardware would be as follows:
> *chassis*: 4x HP DL380 gen8
> *each server hardware:*
> *cpu*: 2x e5-2690v2
> *memory*:256GB
> *Disks*: 12x 1.2TB SAS 10k disks, 2 mirrored for the OS (or using a Kingston
> 2x 128GB mirror), the rest for the VM OS volume.
> *NVMe*: 2x 960GB Kingston KC1000 for builders compiling source code
> *Network: *4 ports  Intel 10Gbit/s SFP +
>
> Given the above configuration and theory, my question is what would be best
> practice in terms of Gluster configuration: *Distributed, Replicated,
> Distributed Replicated, Dispersed, Distributed Dispersed*? (See the volume
> sketch after this message.)
> What is the suggestion for hardware RAID, type 5 or 6, or should I use ZFS?
> For node-to-node communication, I intend to use 3 ports for storage
> communication and one port for the guest network. Regarding Gluster
> inter-node communication, what is better: would I gain more from 3x 10G LACP
> or from one network for each gluster volume?
>
> Please advice
> Thanks
>
> --
> Tal Bar-or
>
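
As referenced above, VM storage in a hyperconverged oVirt setup is usually a
replica 3 Gluster volume with the virt option group applied. A minimal
sketch, assuming three hosts and brick paths like the ones below:

gluster volume create vmstore replica 3 \
    host1:/gluster_bricks/vmstore/vmstore \
    host2:/gluster_bricks/vmstore/vmstore \
    host3:/gluster_bricks/vmstore/vmstore
gluster volume set vmstore group virt
gluster volume start vmstore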
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GZDN4VQ65ZY2VE7GLZSG6X4W533DECEG/


[ovirt-users] Re: Ovirt/ RHV Disaster Recovery Designing

2018-07-07 Thread Maor Lipchuk
Thanks Greg for the quick response and for cc'ing me.

Hi Tanzeeb, great to talk with you.
Please allow me to comment inline; I hope it will clear things up for you.
Basically, the solution we are discussing is the site-to-site solution for
oVirt DR.
The official documentation for it is at
[1]
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/pdf/disaster_recovery_guide/Red_Hat_Virtualization-4.2-Disaster_Recovery_Guide-en-US.pdf

I recommend that you take a look at that documentation since it might
answer many of your questions.

Please feel free to contact me about anything that is still unclear.

Regards,
Maor

On Sat, Jul 7, 2018 at 12:38 AM, Greg Sheremeta  wrote:

> Hi Tanzeeb,
>
> Unfortunately googling for 'ovirt disaster recovery' doesn't yield great
> results. Search for the things developed primarily by Maor [cc'd] recently.
>
> On Fri, Jul 6, 2018 at 12:59 PM  wrote:
>
>> Hi
>> I've been looking for some ideas about designing disaster recovery with
>> oVirt 4.2. Since 4.2 we have the option to integrate with a disaster
>> recovery site as well. I went through some videos on the internet and want
>> to check whether the points below are correct or not, and you can also help
>> me with what other things I should keep in mind while planning and
>> designing this.
>>
>> 1. We have to keep both sites up, with the data center and cluster already
>> created; one site has the storage domain attached and the other site has no
>> storage domain.
>
>
That is correct


> 2. We have to keep the latency between the two sites at 10 ms maximum.
>
>
That depends on your replication configuration process, sounds good to me.


> 3. We have to configure virtual machines with affinity groups.
>>
>
You don't have to do that to support site-to-site DR.


> 4. The VMs which are configured as highly available VMs will be migrated
>> first.
>>
>
Not exactly: all the VMs will be migrated, regardless of whether they are HA
or not; the HA VMs will be run first (if they were running before on the
primary setup).


> 5. There's an Ansible script we have to run to fail over and fail back.
>>
>
Indeed, and there is also a Python script that makes it more user friendly.
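
For illustration only, a failover run typically looks something like the
sketch below; the playbook file names and tags here are assumptions based on
the ovirt-ansible-disaster-recovery repository layout, so check its README
for the exact names:

# generate the var-mapping file describing the primary/secondary setups
ansible-playbook dr_generate.yml

# fail over to the secondary site
ansible-playbook dr_failover.yml -t fail_over

# fail back to the primary site once it is healthy again
ansible-playbook dr_failback.yml -t fail_back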


>
>> Now, please could you help me out regarding this.
>> 1. Where can I find this ansible script? Is this
>> https://github.com/oVirt/ovirt-ansible-disaster-recovery.git?
>>
>
> yes. Note the youtube links on that.
>
> @Maor for the rest of your questions.
>
>
>> 2. Can this script run on RHV as well, or just on oVirt? Or what other
>> changes do I have to make for RHV to have this functionality?
>>
>
Theoretically the script should run on RHV as well, but I would not
recommend mixing the two in one setup.


> 3. How does oVirt/RHV work with OVN compared to NSX? Are there any
>> challenges?
>>
>
I don't think that is related to the oVirt DR solution.
@Dan perhaps you can shed more light on that subject.


> 4. Can you share some high- and low-level diagrams related to RHV
>> disaster recovery planning?
>>
>
Sure, I suggest that you look into the documentation mentioned above (see [1]),
at Figure 3.1, Active-Passive Configuration, and Figure 3.2.

I think that what is more important for you are the steps to prepare your
setup to support DR; these should also be in the documentation mentioned
above.

5. How does the storage migrate on the backend? During the script?
>>
>

The storage is not migrated; it should be replicated to the secondary
site's storage server, and the secondary site should import those storage
domains.
The storage replication is managed by the admin.


> 6. Where do I run this script? During fail over at primary site and during
>> fail back at secondary site?
>>
>
The script can run on any host, as long as that host has the oVirt Python SDK
installed (see
https://www.ovirt.org/develop/release-management/features/infra/python-sdk/
).
You can run this script on either the primary site or the secondary site.
My suggestion is to have a separate machine/VM/container which will be the
one that runs it.


> 7. Basically, since only the storage migrates and all other components are
>> already up, on the backend it is only dealing with storage migration
>> according to defined policies?
>>
>
The storage domains do not get migrated; they are replicated.
It is a DR solution based on the storage servers, since those contain the
OVF_STORE disks, which help to recover the VMs/templates in the setup.



>
>> Please could you help me out with some design ideas on what the best
>> architecture is when planning disaster recovery on RHV.
>>
>
I suggest that you take a look at [1].
There should be an explanation of the steps and also several use cases for how
to test it (see APPENDIX B. TESTING THE ACTIVE-PASSIVE CONFIGURATION).


>
> If you are a Red Hat customer, a Red Hat Consulting engagement could also
> help.
>
>
>> Thank you.
>>

[ovirt-users] Re: ovirt with multiple engines

2018-05-17 Thread Maor Lipchuk
On Thu, May 17, 2018 at 2:00 AM, Nir Soffer  wrote:

> On Wed, May 16, 2018 at 7:37 PM Michael Watters 
> wrote:
>
>> Is it possible to have multiple engines with different versions of ovirt
>> running in the same cluster?  I am working on a plan to upgrade our
>> ovirt cluster to the 4.2 release however we would like to have a
>> rollback plan in case there are issues with the new engine.
>>
>
> You can run multiple engines, each on a different host (or vm). Then
> you can remove entities from one engine and add them to the other.
> If something goes wrong, you can move the entities back to the original
> engine.
>
> I think the new DR support can make this process easy and robust,
> but not sure it works with older engine versions.
>

DR supports engine 4.2 and above, but the use case is a bit different from
what's described.
You can try to use replicated storage domains in the new setup, and then
try to register those VMs and upgrade them to cluster 4.2.
If any problem occurs, you can shut down the hosts in the new setup and
start the old setup back again.


>
> Adding Maor to add more info on this direction.
>
> I think that hosted engine upgrade flow may also be useful, dumping
> engine database on the old engine, and restoring it to a new engine,
> and it may work better for upgrading between different versions.
>
> Simone is the expert in this area.
>
> Nir
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


Re: [ovirt-users] Virtual Machine Storage Size

2018-04-30 Thread Maor Lipchuk
By "used storage size of a VM" do you mean the disks which are attached to
it?

If it does then you can check each attached disk of the VM and summerize
their sizes.
For getting the disks attachments of the VM you can use the following REST
URL:
http://111.11.11.111:8080/ovirt-engine/api/vms/---222/diskattachments

Then you can use the following URL for each disk attachment:
  http://111.11.11.111:8080/ovirt-engine/api/disks/-xxx-xxx-

Here is an example of the output:

test_Disk1

200704
test_Disk1
data
cow
6c01687e-54b5-449d-be26-8ecfc092e4f8
false
1073741824
qcow2_v3
false
true
ok
image
200704
false
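
If it helps, the same two calls can be scripted; a minimal curl sketch
(the engine address, credentials and ids below are placeholders):

# list the disk attachments of the VM to collect the disk ids
curl -s -k -u admin@internal:PASSWORD \
    "https://ENGINE_FQDN/ovirt-engine/api/vms/VM_ID/diskattachments"

# fetch each disk and read its size fields (provisioned_size / actual_size)
curl -s -k -u admin@internal:PASSWORD \
    "https://ENGINE_FQDN/ovirt-engine/api/disks/DISK_ID"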



Regards,
Maor

On Thu, Apr 26, 2018 at 1:05 PM, Hari Prasanth Loganathan <
hariprasant...@msystechnologies.com> wrote:

> Hi Team,
>
> How do I get the total and used storage size of a VM? When I run the statistics
> REST URL I get the following response; which are the total and used storage
> values in it?
>
> {
> "next_run_configuration_exists": "false",
> "numa_tune_mode": "interleave",
> "status": "down",
> "stop_time": 1524728637295,
> "original_template": {
> "href": "/ovirt-engine/api/templates/
> ----",
> "id": "----"
> },
> "statistics": {
> "statistic": [
> {
> "kind": "gauge",
> "type": "integer",
> "unit": "bytes",
> "values": {
> "value": [
> {
> "datum": 1073741824
> }
> ]
> },
> "vm": {
> "href": "/ovirt-engine/api/vms/
> b89deb8c-882c-47a4-9911-b4978cddcded",
> "id": "b89deb8c-882c-47a4-9911-b4978cddcded"
> },
> "name": "memory.installed",
> "description": "Total memory configured",
> "href": "/ovirt-engine/api/vms/
> b89deb8c-882c-47a4-9911-b4978cddcded/statistics/5a89a1d2-32be-33f7-a0d1-
> f8b5bc974ff6",
> "id": "5a89a1d2-32be-33f7-a0d1-f8b5bc974ff6"
> },
> {
> "kind": "gauge",
> "type": "integer",
> "unit": "bytes",
> "values": {
> "value": [
> {
> "datum": 0
> }
> ]
> },
> "vm": {
> "href": "/ovirt-engine/api/vms/
> b89deb8c-882c-47a4-9911-b4978cddcded",
> "id": "b89deb8c-882c-47a4-9911-b4978cddcded"
> },
> "name": "memory.used",
> "description": "Memory used (agent)",
> "href": "/ovirt-engine/api/vms/
> b89deb8c-882c-47a4-9911-b4978cddcded/statistics/b7499508-c1c3-32f0-8174-
> c1783e57bb08",
> "id": "b7499508-c1c3-32f0-8174-c1783e57bb08"
> },
> {
> "kind": "gauge",
> "type": "decimal",
> "unit": "percent",
> "values": {
> "value": [
> {
> "datum": 0
> }
> ]
> },
> "vm": {
> "href": "/ovirt-engine/api/vms/
> b89deb8c-882c-47a4-9911-b4978cddcded",
> "id": "b89deb8c-882c-47a4-9911-b4978cddcded"
> },
> "name": "cpu.current.guest",
> "description": "CPU used by guest",
> "href": "/ovirt-engine/api/vms/
> b89deb8c-882c-47a4-9911-b4978cddcded/statistics/ef802239-b74a-329f-9955-
> be8fea6b50a4",
> "id": "ef802239-b74a-329f-9955-be8fea6b50a4"
> },
> {
> "kind": "gauge",
> "type": "decimal",
> "unit": "percent",
> "values": {
> "value": [
> {
> "datum": 0
> }
> ]
> },
> "vm": {
> "href": "/ovirt-engine/api/vms/
> 

Re: [ovirt-users] Posix FS as alternative to local storage?

2018-04-30 Thread Maor Lipchuk
On Mon, Apr 30, 2018 at 2:01 PM, Eduardo Mayoral  wrote:

> On 30/04/18 12:51, Tony Brian Albers wrote:
> > On 30/04/18 11:43, Eduardo Mayoral wrote:
> >> Hi,
> >>
> >>  I would like to set up a new oVirt deployment with hosts that have
> >> the VMs running on local attached storage. I understand this has the
> >> requirement of having each host in its own cluster (and own datacenter,
> >> it seems, I understand the need for the dedicated cluster, not so much
> >> for the dedicated datacenter).
> >>
> >>  At the same time, I would like to have some shared storage domains
> >> so I can use it to export VMs or migrate them around hosts (probably in
> >> three stages, first migrate VM storage from local to the shared storage
> >> domain, second migrate the host (probably not possible to do a "hot"
> >> migration, but at least "cold"), third migrate the VM storage from the
> >> shared storage domain to the local storage domain of the new host).
> >>
> >>  So I thought maybe I can deploy a datacenter in shared storage
> mode,
> >> with one cluster per host. Use one or two shared storage domains for
> >> master and as an stage area for planned VM migrations as explained
> >> before, and then configure several storage domains, one per host, as
> >> posix FS . I would then deploy the VMs on the local posix FS storage
> >> domains and set affinity rules for the VMs to their hosts as needed.
> >>
> >>  Would this work? Is there a better way of achieving local storage
> >> and retaining the ability to share storage among hosts and migrate VMs?
> >>
> >>
> > Have you thought about using glusterfs? If hosts are physically close,
> > that would probably be the best solution.
> >
>
> Actually, yes, I also had glusterfs in mind. However one of the main
> reasons to use local storage is performance, and I am concerned about
> the write latencies of gluster (If using gluster, I would handle things
> so the VM runs on one of the gluster nodes hosting the VM data, so I
> assume the read latency will be close to the one I would get with local
> storage, but the gluster replica(s) will be on other hosts, so write
> latency may be significantly worse).
>
> Thanks a lot for the suggestion, it is a good one, however, the original
> question stands: Would this work? Is there a better way of achieving
> local storage and retaining the ability to share storage among hosts and
> migrate VMs?
>

Hi Eduardo,
We have supported shared storage domains in a local data center since oVirt 4.1.
Would this help you by any chance?


> Best regards,
>
> --
> Eduardo Mayoral.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Help debugging VM import error

2018-04-26 Thread Maor Lipchuk
It seems like a regression from 55a71302ef91c91e8d9d418794ea3be990dcfe71.
Since the memory disk was not considered a disk entity in the engine, the VM
registration worked well.
Now that the memory disk has become an entity and the validation was added, it
should also be addressed in the VM's OVF.
Currently it isn't part of the VM's OVF, so the disk does not get created in the
DB, and that is probably why the operation fails.
Thanks Arik for pointing this out.

Regards,
Maor

On Thu, Apr 26, 2018 at 11:06 AM, Arik Hadas  wrote:

>
> On Mon, Apr 23, 2018 at 10:02 PM, Roy Golan  wrote:
>
>> I suspect
>>
>> List guids = Guid.createGuidListFromString(
>> snap.getMemoryVolume());
>>
>> where guids is empty because there is no memory volume, which fails this:
>>
>> StorageDomain sd = 
>> getStorageDomainDao().getForStoragePool(guids.get(0),
>> params.getStoragePoolId());
>>
>
> Note that this analysis is based on an incorrect version of the code, since
> the issue happened with version 4.2.2.6. So it may still happen with the
> up-to-date code on the master branch as well. (Also note that had 'guids'
> been empty, the exception would have been IndexOutOfBoundsException rather
> than NPE).
>
>
>>
>>
>> On Mon, 23 Apr 2018 at 22:01 Benny Zlotnik  wrote:
>>
>>> Looks like a bug. Can you please file a report:
>>> https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
>>>
>>> On Mon, Apr 23, 2018 at 9:38 PM, ~Stack~  wrote:
>>>
 Greetings,

 After my rebuild, I have imported my VMs. Everything went smoothly and
 all of them came back, except one. One VM gives me the error "General
 command validation failure.", which isn't helping me when I search for
 the problem.

 The oVirt engine logs aren't much better at pointing to what the failure
 is (posted below).

 Can someone help me figure out why this VM isn't importing, please?

 Thanks!
 ~Stack~


 2018-04-23 13:31:44,313-05 INFO
 [org.ovirt.engine.core.bll.exportimport.ImportVmFromConfigur
 ationCommand]
 (default task-72) [6793fe73-7cda-4cb5-a806-7104a05c3c1b] Lock Acquired
 to object 'EngineLock:{exclusiveLocks='[infra01=VM_NAME,
 0b64ced5-7e4b-48cd-9d0d-24e8b905758c=VM]',
 sharedLocks='[0b64ced5-7e4b-48cd-9d0d-24e8b905758c=REMOTE_VM]'}'
 2018-04-23 13:31:44,349-05 ERROR
 [org.ovirt.engine.core.bll.exportimport.ImportVmFromConfigur
 ationCommand]
 (default task-72) [6793fe73-7cda-4cb5-a806-7104a05c3c1b] Error during
 ValidateFailure.: java.lang.NullPointerException
 at
 org.ovirt.engine.core.bll.validator.ImportValidator.validate
 StorageExistsForMemoryDisks(ImportValidator.java:140)
 [bll.jar:]
 at
 org.ovirt.engine.core.bll.exportimport.ImportVmFromConfigura
 tionCommand.isValidDisks(ImportVmFromConfigurationCommand.java:151)
 [bll.jar:]
 at
 org.ovirt.engine.core.bll.exportimport.ImportVmFromConfigura
 tionCommand.validate(ImportVmFromConfigurationCommand.java:103)
 [bll.jar:]
 at
 org.ovirt.engine.core.bll.CommandBase.internalValidate(Comma
 ndBase.java:779)
 [bll.jar:]
 at
 org.ovirt.engine.core.bll.CommandBase.validateOnly(CommandBa
 se.java:368)
 [bll.jar:]
 at
 org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner
 .canRunActions(PrevalidatingMultipleActionsRunner.java:113)
 [bll.jar:]
 at
 org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner
 .invokeCommands(PrevalidatingMultipleActionsRunner.java:99)
 [bll.jar:]
 at
 org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner
 .execute(PrevalidatingMultipleActionsRunner.java:76)
 [bll.jar:]
 at
 org.ovirt.engine.core.bll.Backend.runMultipleActionsImpl(Bac
 kend.java:596)
 [bll.jar:]
 at
 org.ovirt.engine.core.bll.Backend.runMultipleActions(Backend.java:566)
 [bll.jar:]
 at sun.reflect.GeneratedMethodAccessor914.invoke(Unknown
 Source)
 [:1.8.0_161]
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMe
 thodAccessorImpl.java:43)
 [rt.jar:1.8.0_161]
 at java.lang.reflect.Method.invoke(Method.java:498)
 [rt.jar:1.8.0_161]
 at
 org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.
 processInvocation(ManagedReferenceMethodInterceptor.java:52)
 at
 org.jboss.invocation.InterceptorContext.proceed(InterceptorC
 ontext.java:422)
 at
 org.jboss.invocation.InterceptorContext$Invocation.proceed(
 InterceptorContext.java:509)
 at
 org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.del
 egateInterception(Jsr299BindingsInterceptor.java:78)
 at
 

Re: [ovirt-users] Attaching disk returns "Bad Request"

2018-04-25 Thread Maor Lipchuk
Hi Marcin,

It seems to be available since Apr 18th (see [1]), so I think you could try
to upgrade and check if that fixes your problem.

[1] https://www.ovirt.org/release/4.2.3/

Regards,
Maor

On Wed, Apr 25, 2018 at 3:03 PM, Marcin Kubacki <m.kuba...@storware.eu>
wrote:

> Hi Maor,
>
> I think it was 4.2.2 - and when is 4.2.3 supposed to be released?
>
>
> Pozdrawiam / Best Regards
>
> Marcin Kubacki
> Chief Software Architect
>
> e-mail: m.kuba...@storware.eu <j.sobieszczan...@storware.eu>
> mobile: +48 730-602-612
>
> *ul. Leszno 8/44*
> *01-192 Warszawa *
> *www.storware.eu <https://www.storware.eu/>*
>
>
>
>
> Message written by Maor Lipchuk <mlipc...@redhat.com> on
> 25.04.2018 at 13:55:
>
> Hi Marcin and Hanna,
>
> I've found the following bug, https://bugzilla.redhat.com/1546832, which
> probably fixes that issue.
> The fix was introduced in oVirt 4.2.3.2.
> Which version are you using?
>
> Regards,
> Maor
>
>
>
>
> On Wed, Apr 25, 2018 at 11:24 AM, Maor Lipchuk <mlipc...@redhat.com>
> wrote:
>
>> Hi Marcin,
>>
>> Can you please also attach the VDSM log so we can see the exact XML that
>> is being sent to libvirt.
>> (I'm replying on the users list, let's continue to discuss it here)
>>
>> Regards,
>> Maor
>>
>> On Tue, Apr 24, 2018 at 5:20 PM, Marcin Kubacki <m.kuba...@storware.eu>
>> wrote:
>>
>>> Hi Maor,
>>>
>>> it looks like the problem is deeper, potentially related to how the VM
>>> storage is configured (not to the request itself),
>>> and it is the first time we’ve seen something like this. Customer has
>>> oVirt 4.2, and this wasn’t happening in older environments.
>>>
>>> Can you help us solve this issue?
>>>
>>>
>>>
>>> Pozdrawiam / Best Regards
>>>
>>> Marcin Kubacki
>>> Chief Software Architect
>>>
>>> e-mail: m.kuba...@storware.eu <j.sobieszczan...@storware.eu>
>>> mobile: +48 730-602-612
>>>
>>> *ul. Leszno 8/44*
>>> *01-192 Warszawa *
>>> *www.storware.eu <https://www.storware.eu/>*
>>>
>>>
>>>
>>>
>>> Message written by Hanna Terentieva <h.terenti...@storware.eu> on
>>> 23.04.2018 at 10:33:
>>>
>>> Hi Maor,
>>>
>>> My request looks correct, and as for the oVirt mailing list,
>>> there seems to be a similar issue with a VDSM failure here:
>>> https://www.mail-archive.com/users@ovirt.org/msg45797.html.
>>> Have you possibly encountered a similar issue?
>>>
>>> On Sun, 2018-04-22 at 10:47 +0300, Maor Lipchuk wrote:
>>>
>>> Hi Hanna,
>>>
>>> If the disk already exists you should use the PUT method with the following
>>> URL:
>>> PUT /vms/{vm:id}/diskattachments/{attachment:id}
>>>
>>> You can take a look at the following documentation for more details:
>>>   https://access.redhat.com/documentation/en-us/red_hat_virtua
>>> lization/4.1/html-single/rest_api_guide/#services-disk_attac
>>> hment-methods-update
>>>
>>> If you have already done this and still have a problem, I suggest sending
>>> this to the oVirt users mailing list, so you can get more appropriate
>>> guidance there.
>>>
>>> Regards,
>>> Maor
>>>
>>> On Fri, Apr 20, 2018 at 11:06 AM, Hanna Terentieva <
>>> h.terenti...@storware.eu> wrote:
>>>
>>> Hi Maor,
>>>
>>> We encountered an issue with hotplugging disks when trying to attach a
>>> disk in a client's environment.
>>> The disk attachment request:
>>>
>>> 
>>> false
>>> virtio
>>> true
>>> 
>>> 
>>>
>>> returns this error:
>>>
>>> Error: HTTP response code is "400". HTTP response message is "Bad
>>> Request".
>>> org.ovirt.engine.sdk4.internal.services.ServiceImpl.throwErr
>>> or(ServiceImpl.java:113)
>>> org.ovirt.engine.sdk4.internal.services.ServiceImpl.checkFau
>>> lt(ServiceImpl.java:40)
>>> org.ovirt.engine.sdk4.internal.services.DiskAttachmentsServi
>>> ceImpl$AddRequestImpl.send(DiskAttachmentsServiceImpl.java:111)
>>> org.ovirt.engine.sdk4.internal.services.DiskAttachmentsServi
>>> ceImpl$AddRequestImpl.send(DiskAttachmentsServiceImpl.java:48)
>>>
>>> oVirt version 4.2.2.
>>> The issue happened around 2018

Re: [ovirt-users] Attaching disk returns "Bad Request"

2018-04-25 Thread Maor Lipchuk
Hi Marcin and Hanna,

I've found the following bug, https://bugzilla.redhat.com/1546832, which
probably fixes that issue.
The fix was introduced in oVirt 4.2.3.2.
Which version are you using?

Regards,
Maor




On Wed, Apr 25, 2018 at 11:24 AM, Maor Lipchuk <mlipc...@redhat.com> wrote:

> Hi Marcin,
>
> Can you please also attach the VDSM log so we can see the exact XML that
> is being sent to libvirt.
> (I'm replying on the users list, let's continue to discuss it here)
>
> Regards,
> Maor
>
> On Tue, Apr 24, 2018 at 5:20 PM, Marcin Kubacki <m.kuba...@storware.eu>
> wrote:
>
>> Hi Maor,
>>
>> it looks like the problem is deeper, potentially related to how the VM
>> storage is configured (not to the request itself),
>> and it is the first time we’ve seen something like this. Customer has
>> oVirt 4.2, and this wasn’t happening in older environments.
>>
>> Can you help us solve this issue?
>>
>>
>>
>> Pozdrawiam / Best Regards
>>
>> Marcin Kubacki
>> Chief Software Architect
>>
>> e-mail: m.kuba...@storware.eu <j.sobieszczan...@storware.eu>
>> mobile: +48 730-602-612
>>
>> *ul. Leszno 8/44*
>> *01-192 Warszawa *
>> *www.storware.eu <https://www.storware.eu/>*
>>
>>
>>
>>
>> Message written by Hanna Terentieva <h.terenti...@storware.eu> on
>> 23.04.2018 at 10:33:
>>
>> Hi Maor,
>>
>> My request looks correct, and as for the oVirt mailing list,
>> there seems to be a similar issue with a VDSM failure here:
>> https://www.mail-archive.com/users@ovirt.org/msg45797.html.
>> Have you possibly encountered a similar issue?
>>
>> On Sun, 2018-04-22 at 10:47 +0300, Maor Lipchuk wrote:
>>
>> Hi Hanna,
>>
>> If the disk already exists you should use the PUT method with the following
>> URL:
>> PUT /vms/{vm:id}/diskattachments/{attachment:id}
>>
>> You can take a look at the following documentation for more details:
>>   https://access.redhat.com/documentation/en-us/red_hat_virtua
>> lization/4.1/html-single/rest_api_guide/#services-disk_
>> attachment-methods-update
>>
>> If you have already done this and still have a problem, I suggest sending this
>> to the oVirt users mailing list, so you can get more appropriate guidance
>> there.
>>
>> Regards,
>> Maor
>>
>> On Fri, Apr 20, 2018 at 11:06 AM, Hanna Terentieva <
>> h.terenti...@storware.eu> wrote:
>>
>> Hi Maor,
>>
>> We encountered an issue with hotplugging disks when trying to attach a
>> disk in a client's environment.
>> The disk attachment request:
>>
>> 
>> false
>> virtio
>> true
>> 
>> 
>>
>> returns this error:
>>
>> Error: HTTP response code is "400". HTTP response message is "Bad
>> Request".
>> org.ovirt.engine.sdk4.internal.services.ServiceImpl.throwErr
>> or(ServiceImpl.java:113)
>> org.ovirt.engine.sdk4.internal.services.ServiceImpl.checkFau
>> lt(ServiceImpl.java:40)
>> org.ovirt.engine.sdk4.internal.services.DiskAttachmentsServi
>> ceImpl$AddRequestImpl.send(DiskAttachmentsServiceImpl.java:111)
>> org.ovirt.engine.sdk4.internal.services.DiskAttachmentsServi
>> ceImpl$AddRequestImpl.send(DiskAttachmentsServiceImpl.java:48)
>>
>> oVirt version 4.2.2.
>> The issue happened around 2018-04-17 07:57.
>> Attaching the log. Could you please help with the problem?
>>
>> --
>>
>> Pozdrawiam\Best Regards
>> Hanna Terentieva
>> Junior Java Developer
>> e-mail: h.terenti...@storware.eu
>>
>> *ul. Leszno 8/44 01-192 Warszawa  www.storware.eu <https://www.storware.eu/>*
>>
>>
>>
>>
>>
>> --
>>
>> Pozdrawiam\Best Regards
>> Hanna Terentieva
>> Junior Java Developer
>> e-mail: h.terenti...@storware.eu
>>
>> *ul. Leszno 8/44 01-192 Warszawa  www.storware.eu <https://www.storware.eu/>*
>>
>>
>>
>>
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Attaching disk returns "Bad Request"

2018-04-25 Thread Maor Lipchuk
Hi Marcin,

Can you please also attach the VDSM log so we can see the exact XML that is
being sent to libvirt.
(I'm replying on the users list, let's continue to discuss it here)

Regards,
Maor

On Tue, Apr 24, 2018 at 5:20 PM, Marcin Kubacki <m.kuba...@storware.eu>
wrote:

> Hi Maor,
>
> it looks like the problem is deeper, potentially related to how the VM
> storage is configured (not to the request itself),
> and it is the first time we’ve seen something like this. Customer has
> oVirt 4.2, and this wasn’t happening in older environments.
>
> Can you help us solve this issue?
>
>
>
> Pozdrawiam / Best Regards
>
> Marcin Kubacki
> Chief Software Architect
>
> e-mail: m.kuba...@storware.eu <j.sobieszczan...@storware.eu>
> mobile: +48 730-602-612
>
> *ul. Leszno 8/44*
> *01-192 Warszawa *
> *www.storware.eu <https://www.storware.eu/>*
>
>
>
>
> Message written by Hanna Terentieva <h.terenti...@storware.eu> on
> 23.04.2018 at 10:33:
>
> Hi Maor,
>
> My request looks correct, and as for the oVirt mailing list,
> there seems to be a similar issue with a VDSM failure here:
> https://www.mail-archive.com/users@ovirt.org/msg45797.html.
> Have you possibly encountered a similar issue?
>
> On Sun, 2018-04-22 at 10:47 +0300, Maor Lipchuk wrote:
>
> Hi Hanna,
>
> If the disk already exists you should use the PUT method with the following URL:
> PUT /vms/{vm:id}/diskattachments/{attachment:id}
>
> You can take a look at the following documentation for more details:
>   https://access.redhat.com/documentation/en-us/red_hat_
> virtualization/4.1/html-single/rest_api_guide/#services-disk_attachment-
> methods-update
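
As an illustration, such an update with curl might look like the sketch
below; the engine address, credentials and ids are placeholders, and the XML
body mirrors the (tag-stripped) request quoted further down:

curl -s -k -u admin@internal:PASSWORD \
    -X PUT -H "Content-Type: application/xml" \
    -d '<disk_attachment><bootable>false</bootable><interface>virtio</interface><active>true</active></disk_attachment>' \
    "https://ENGINE_FQDN/ovirt-engine/api/vms/VM_ID/diskattachments/ATTACHMENT_ID"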
>
> If you have already done this and still have a problem, I suggest sending this
> to the oVirt users mailing list, so you can get more appropriate guidance
> there.
>
> Regards,
> Maor
>
> On Fri, Apr 20, 2018 at 11:06 AM, Hanna Terentieva <
> h.terenti...@storware.eu> wrote:
>
> Hi Maor,
>
> We encountered an issue with hotplugging disks when trying to attach a
> disk in a client's environment.
> The disk attachment request:
>
> 
> false
> virtio
> true
> 
> 
>
> returns this error:
>
> Error: HTTP response code is "400". HTTP response message is "Bad Request".
> org.ovirt.engine.sdk4.internal.services.ServiceImpl.throwErr
> or(ServiceImpl.java:113)
> org.ovirt.engine.sdk4.internal.services.ServiceImpl.checkFau
> lt(ServiceImpl.java:40)
> org.ovirt.engine.sdk4.internal.services.DiskAttachmentsServi
> ceImpl$AddRequestImpl.send(DiskAttachmentsServiceImpl.java:111)
> org.ovirt.engine.sdk4.internal.services.DiskAttachmentsServi
> ceImpl$AddRequestImpl.send(DiskAttachmentsServiceImpl.java:48)
>
> oVirt version 4.2.2.
> The issue happened around 2018-04-17 07:57.
> Attaching the log. Could you please help with the problem?
>
> --
>
> Pozdrawiam\Best Regards
> Hanna Terentieva
> Junior Java Developer
> e-mail: h.terenti...@storware.eu
>
> *ul. Leszno 8/44 01-192 Warszawa  www.storware.eu <https://www.storware.eu/>*
>
>
>
>
>
> --
>
> Pozdrawiam\Best Regards
> Hanna Terentieva
> Junior Java Developer
> e-mail: h.terenti...@storware.eu
>
> *ul. Leszno 8/44 01-192 Warszawa  www.storware.eu <https://www.storware.eu/>*
>
>
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM Disk Import Error

2018-04-22 Thread Maor Lipchuk
It might be that this disk already exists in the setup.
Can you please try to search for it by its id (see the attached file)?
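
For example, a quick way to check is to fetch the disk by id through the REST
API; the engine address and credentials below are placeholders, and the id is
the one from the log further down:

curl -s -k -u admin@internal:PASSWORD \
    "https://ENGINE_FQDN/ovirt-engine/api/disks/7be49698-f3a5-4995-b411-f0490a819950"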

Regards,
Maor

On Sun, Apr 22, 2018 at 7:57 AM, Idan Shaby  wrote:

> Maor, any idea?
>
>
> Regards,
> Idan
>
> On Thu, Apr 19, 2018 at 10:09 AM, Николаев Алексей <
> alexeynikolaev.p...@yandex.ru> wrote:
>
>> Hi community!
>>
>> I have another issue when trying to import VM disks after a partial VM import.
>>
>>
>>
>> 2018-04-19 10:05:44,561+03 INFO  [org.ovirt.engine.core.bll.st
>> orage.disk.image.RegisterDiskCommand] (default task-13)
>> [315a9cfb-db88-48bd-87ed-2d50fdba2fb2] Lock Acquired to object
>> 'EngineLock:{exclusiveLocks='[7be49698-f3a5-4995-b411-f0490a819950=DISK]',
>> sharedLocks=''}'
>> 2018-04-19 10:05:44,599+03 INFO  [org.ovirt.engine.core.vdsbro
>> ker.irsbroker.GetVolumesListVDSCommand] (default task-13)
>> [315a9cfb-db88-48bd-87ed-2d50fdba2fb2] START, GetVolumesListVDSCommand(
>> StoragePoolDomainAndGroupIdBaseVDSCommandParameters:{storage
>> PoolId='45fe4bb5-0b53-4852-ba77-a070c21057f0',
>> ignoreFailoverLimit='false', 
>> storageDomainId='87c300c0-3903-4c27-9956-856c8fbdf4c2',
>> imageGroupId='7be49698-f3a5-4995-b411-f0490a819950'}), log id: fbfb25
>> 2018-04-19 10:05:44,741+03 INFO  [org.ovirt.engine.core.vdsbro
>> ker.irsbroker.GetVolumesListVDSCommand] (default task-13)
>> [315a9cfb-db88-48bd-87ed-2d50fdba2fb2] FINISH, GetVolumesListVDSCommand,
>> return: [fd8822ee-4fc9-49ba-9760-87a85d56bf91], log id: fbfb25
>> 2018-04-19 10:05:44,743+03 INFO  [org.ovirt.engine.core.vdsbro
>> ker.irsbroker.GetImageInfoVDSCommand] (default task-13)
>> [315a9cfb-db88-48bd-87ed-2d50fdba2fb2] START, GetImageInfoVDSCommand(
>> GetImageInfoVDSCommandParameters:{storagePoolId='45fe4bb5-0b53-4852-ba77-a070c21057f0',
>> ignoreFailoverLimit='false', 
>> storageDomainId='87c300c0-3903-4c27-9956-856c8fbdf4c2',
>> imageGroupId='7be49698-f3a5-4995-b411-f0490a819950',
>> imageId='fd8822ee-4fc9-49ba-9760-87a85d56bf91'}), log id: 658b5029
>> 2018-04-19 10:05:44,746+03 INFO  [org.ovirt.engine.core.vdsbro
>> ker.vdsbroker.GetVolumeInfoVDSCommand] (default task-13)
>> [315a9cfb-db88-48bd-87ed-2d50fdba2fb2] START,
>> GetVolumeInfoVDSCommand(HostName = node08-01,
>> GetVolumeInfoVDSCommandParameters:{hostId='3663c31f-c099-4823-83bd-3714b20c3d5a',
>> storagePoolId='45fe4bb5-0b53-4852-ba77-a070c21057f0',
>> storageDomainId='87c300c0-3903-4c27-9956-856c8fbdf4c2',
>> imageGroupId='7be49698-f3a5-4995-b411-f0490a819950',
>> imageId='fd8822ee-4fc9-49ba-9760-87a85d56bf91'}), log id: 1a2f86ea
>> 2018-04-19 10:05:44,851+03 INFO  [org.ovirt.engine.core.vdsbro
>> ker.vdsbroker.GetVolumeInfoVDSCommand] (default task-13)
>> [315a9cfb-db88-48bd-87ed-2d50fdba2fb2] FINISH, GetVolumeInfoVDSCommand,
>> return: org.ovirt.engine.core.common.businessentities.storage.DiskIm
>> age@8b2e6cf1, log id: 1a2f86ea
>> 2018-04-19 10:05:44,851+03 INFO  [org.ovirt.engine.core.vdsbro
>> ker.irsbroker.GetImageInfoVDSCommand] (default task-13)
>> [315a9cfb-db88-48bd-87ed-2d50fdba2fb2] FINISH, GetImageInfoVDSCommand,
>> return: org.ovirt.engine.core.common.businessentities.storage.DiskIm
>> age@8b2e6cf1, log id: 658b5029
>> 2018-04-19 10:05:44,865+03 INFO  [org.ovirt.engine.core.bll.st
>> orage.disk.image.ImagesHandler] (default task-13)
>> [315a9cfb-db88-48bd-87ed-2d50fdba2fb2] Disk alias retrieved from the
>> client is null or empty, the suggested default disk alias to be used is
>> 'RegisteredDisk_2018-04-19_10-05-44'
>> 2018-04-19 10:05:44,865+03 WARN  [org.ovirt.engine.core.bll.st
>> orage.disk.image.RegisterDiskCommand] (default task-13)
>> [315a9cfb-db88-48bd-87ed-2d50fdba2fb2] Validation of action
>> 'RegisterDisk' failed for user NikolaevAA@. Reasons:
>> VAR__ACTION__IMPORT,VAR__TYPE__DISK,$diskAliases
>> RegisteredDisk_2018-04-19_10-05-44,ACTION_TYPE_FAILED_DISKS_LOCKED
>> 2018-04-19 10:05:44,866+03 INFO  [org.ovirt.engine.core.bll.st
>> orage.disk.image.RegisterDiskCommand] (default task-13)
>> [315a9cfb-db88-48bd-87ed-2d50fdba2fb2] Lock freed to object
>> 'EngineLock:{exclusiveLocks='[7be49698-f3a5-4995-b411-f0490a819950=DISK]',
>> sharedLocks=''}'
>>
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VMs with multiple vdisks don't migrate

2018-02-22 Thread Maor Lipchuk
I encountered a bug (see [1]) which contains the same error mentioned in
your VDSM logs (see [2]), but I doubt it is related.
Milan, maybe you have some advice on troubleshooting the issue? Would the
libvirt/qemu logs help?
I would suggest opening a bug on that issue so we can track it more
properly.

Regards,
Maor


[1]
https://bugzilla.redhat.com/show_bug.cgi?id=1486543 -  Migration leads to
VM running on 2 Hosts

[2]
2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer]
Internal server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572,
in _handle_request
res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in
_dynamicMethod
result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
  File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
'current_values': v.getIoTune()}
  File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
result = self.getIoTuneResponse()
  File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
res = self._dom.blockIoTune(
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47,
in __getattr__
% self.vmid)
NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not
started yet or was shut down

On Thu, Feb 22, 2018 at 4:22 PM, fsoyer <fso...@systea.fr> wrote:

> Hi,
> Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger
> (192.168.0.6) migrated (or failed to migrate...) to victor (192.168.0.5),
> while the engine.log in the first mail on 2018-02-12 was for VMs standing
> on victor, migrated (or failed to migrate...) to ginger. Symptoms were
> exactly the same, in both directions, and VMs works like a charm before,
> and even after (migration "killed" by a poweroff of VMs).
> Am I the only one experimenting this problem ?
>
>
> Thanks
> --
>
> Cordialement,
>
> *Frank Soyer *
>
>
>
> On Thursday, February 22, 2018 00:45 CET, Maor Lipchuk <mlipc...@redhat.com>
> wrote:
>
>
> Hi Frank,
>
> Sorry about the delayed response.
> I've been going through the logs you attached, but I could not find
> any specific indication of why the migration failed because of the disk you
> were mentioning.
> Does this VM run with both disks on the target host without migration?
>
> Regards,
> Maor
>
>
> On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fso...@systea.fr> wrote:
>>
>> Hi Maor,
>> sorry for the double post, I've changed the email address of my account and
>> supposed that I'd need to re-post it.
>> And thank you for your time. Here are the logs. I added a vdisk to an
>> existing VM: it no longer migrates, and needs to be powered off after minutes.
>> Then simply deleting the second disk makes it migrate in exactly 9s without
>> a problem!
>> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
>> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
>>
>> --
>>
>> Cordialement,
>>
>> *Frank Soyer *
>> On Wednesday, February 14, 2018 11:04 CET, Maor Lipchuk <
>> mlipc...@redhat.com> wrote:
>>
>>
>> Hi Frank,
>>
>> I already replied on your last email.
>> Can you provide the VDSM logs from the time of the migration failure for
>> both hosts:
>>   ginger.local.systea.f <http://ginger.local.systea.fr/>r and v
>> ictor.local.systea.fr
>>
>> Thanks,
>> Maor
>>
>> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fso...@systea.fr> wrote:
>>>
>>> Hi all,
>>> I discovered yesterday a problem when migrating VMs with more than one
>>> vdisk.
>>> On our test servers (oVirt 4.1, shared storage with Gluster), I created 2
>>> VMs needed for a test, from a template with a 20G vdisk. On these VMs I
>>> added a 100G vdisk (for these tests I didn't want to waste time extending
>>> the existing vdisks... but I lost time in the end...). The VMs with the 2
>>> vdisks work well.
>>> Now I saw some updates waiting on the host. I tried to put it into
>>> maintenance... but it got stuck on the two VMs. They were marked "migrating",
>>> but were no longer accessible. Other (small) VMs with only 1 vdisk were
>>> migrated without problem at the same time.
>>> I saw that a kvm process for the (big) VMs was launched on the source
>>> AND destination host, but after tens of minutes, the migration and the VMs
>>> were still frozen. I tried to cancel the migr

Re: [ovirt-users] VMs with multiple vdisks don't migrate

2018-02-21 Thread Maor Lipchuk
Hi Frank,

Sorry about the delayed response.
I've been going through the logs you attached, but I could not find
any specific indication of why the migration failed because of the disk you
were mentioning.
Does this VM run with both disks on the target host without migration?

Regards,
Maor


On Fri, Feb 16, 2018 at 11:03 AM, fsoyer <fso...@systea.fr> wrote:

> Hi Maor,
> sorry for the double post, I've changed the email address of my account and
> supposed that I'd need to re-post it.
> And thank you for your time. Here are the logs. I added a vdisk to an
> existing VM: it no longer migrates, and needs to be powered off after minutes.
> Then simply deleting the second disk makes it migrate in exactly 9s without
> a problem!
> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
>
> --
>
> Cordialement,
>
> *Frank Soyer *
> On Wednesday, February 14, 2018 11:04 CET, Maor Lipchuk <mlipc...@redhat.com>
> wrote:
>
>
> Hi Frank,
>
> I already replied on your last email.
> Can you provide the VDSM logs from the time of the migration failure for
> both hosts:
>   ginger.local.systea.f <http://ginger.local.systea.fr/>r and v
> ictor.local.systea.fr
>
> Thanks,
> Maor
>
> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer <fso...@systea.fr> wrote:
>>
>> Hi all,
>> I discovered yesterday a problem when migrating VMs with more than one
>> vdisk.
>> On our test servers (oVirt 4.1, shared storage with Gluster), I created 2
>> VMs needed for a test, from a template with a 20G vdisk. On these VMs I
>> added a 100G vdisk (for these tests I didn't want to waste time extending
>> the existing vdisks... but I lost time in the end...). The VMs with the 2
>> vdisks work well.
>> Now I saw some updates waiting on the host. I tried to put it into
>> maintenance... but it got stuck on the two VMs. They were marked "migrating",
>> but were no longer accessible. Other (small) VMs with only 1 vdisk were
>> migrated without problem at the same time.
>> I saw that a kvm process for the (big) VMs was launched on the source AND
>> destination host, but after tens of minutes, the migration and the VMs were
>> still frozen. I tried to cancel the migration for the VMs: it failed. The
>> only way to stop it was to power off the VMs: the kvm process died on the 2
>> hosts and the GUI alerted on a failed migration.
>> Being unsure, I tried to delete the second vdisk on one of these VMs: it
>> then migrated without error! And no access problem.
>> I tried to extend the first vdisk of the second VM, then delete the second
>> vdisk: it now migrates without problem!
>>
>> So after another test with a VM with 2 vdisks, I can say that this
>> blocked the migration process :(
>>
>> In engine.log, for a VMs with 1 vdisk migrating well, we see :
>>
>> 2018-02-12 16:46:29,705+01 INFO  
>> [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>> (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired
>> to object 
>> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]',
>> sharedLocks=''}'
>> 2018-02-12 16:46:29,955+01 INFO  
>> [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
>> Running command: MigrateVmToServerCommand internal: false. Entities
>> affected :  ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction
>> group MIGRATE_VM with role type USER
>> 2018-02-12 16:46:30,261+01 INFO  
>> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6',
>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='
>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false',
>> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
>> maxIncomingMigrations='2', maxOutgoingMigrations='2',
>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
>> action={name=setDowntime, params=[200]}}, {limit=3,
>> action={name=setDowntime, params=[300]}}, {limit=4,
>> action={name=setDowntime, params=[400]}}, {limit=6,
>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
>> params=[]}}]]'})

Re: [ovirt-users] Import Domain and snapshot issue ... please help !!!

2018-02-18 Thread Maor Lipchuk
Ala,

IIUC you mentioned that a locked snapshot can still be removed.
Can you please explain how to do that?

Regards,
Maor

On Fri, Feb 16, 2018 at 10:50 AM, Enrico Becchetti <
enrico.becche...@pg.infn.it> wrote:

> After rebooting the engine virtual machine the task disappears, but the
> virtual disk is still locked;
> any ideas how to remove that lock?
> Thanks again.
> Enrico
>
>
> Il 16/02/2018 09:45, Enrico Becchetti ha scritto:
>
>Dear All,
> Are there tools to remove this task (attached)?
>
> taskcleaner.sh doesn't seem to work:
>
> [root@ovirt-new dbutils]# ./taskcleaner.sh -v -r
> select exists (select * from information_schema.tables where table_schema
> = 'public' and table_name = 'command_entities');
>  t
> SELECT DeleteAllCommands();
>  6
> [root@ovirt-new dbutils]# ./taskcleaner.sh -v -R
> select exists (select * from information_schema.tables where table_schema
> = 'public' and table_name = 'command_entities');
>  t
>  This will remove all async_tasks table content!!!
> Caution, this operation should be used with care. Please contact support
> prior to running this command
> Are you sure you want to proceed? [y/n]
> y
> TRUNCATE TABLE async_tasks cascade;
> TRUNCATE TABLE
>
> after that I still see the same running tasks. Does it make sense?
>
> Thanks
> Best Regards
> Enrico
>
>
> Il 14/02/2018 15:53, Enrico Becchetti ha scritto:
>
> Dear All,
> old snapshots seem to be the problem. In fact the DATA_FC domain, running
> in 3.5, had some LVM snapshot volumes. Before deactivating DATA_FC I
> didn't remove these snapshots, so when I attached this volume to the new
> oVirt 4.2 and imported all the VMs at the same time I also imported all
> the snapshots. But now how can I remove them? Through the oVirt web
> interface the running remove tasks still hang. Are there any other
> methods?
> Thanks for following this case.
> Best Regards
> Enrico
>
> Il 14/02/2018 14:34, Maor Lipchuk ha scritto:
>
> Seems like all the engine logs are full of the same error.
> From vdsm.log.16.xz I can see an error which might explain this failure:
>
> 2018-02-12 07:51:16,161+0100 INFO  (ioprocess communication (40573))
> [IOProcess] Starting ioprocess (__init__:447)
> 2018-02-12 07:51:16,201+0100 INFO  (jsonrpc/3) [vdsm.api] FINISH
> mergeSnapshots return=None from=:::10.0.0.46,57032,
> flow_id=fd4041b3-2301-44b0-aa65-02bd089f6568, 
> task_id=1be430dc-eeb0-4dc9-92df-3f5b7943c6e0
> (api:52)
> 2018-02-12 07:51:16,275+0100 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
> call Image.mergeSnapshots succeeded in 0.13 seconds (__init__:573)
> 2018-02-12 07:51:16,276+0100 INFO  (tasks/1) [storage.ThreadPool.WorkerThread]
> START task 1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (cmd= Task.commit of >,
> args=None) (threadPool:208)
> 2018-02-12 07:51:16,543+0100 INFO  (tasks/1) [storage.Image]
> sdUUID=47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5 vmUUID=
> imgUUID=ee9ab34c-47a8-4306-95d7-dd4318c69ef5 
> ancestor=9cdc96de-65b7-4187-8ec3-8190b78c1825
> successor=8f595e80-1013-4c14-a2f5-252bce9526fdpostZero=False
> discard=False (image:1240)
> 2018-02-12 07:51:16,669+0100 ERROR (tasks/1) [storage.TaskManager.Task]
> (Task='1be430dc-eeb0-4dc9-92df-3f5b7943c6e0') Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336,
> in run
> return self.cmd(*self.argslist, **self.argsdict)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
> 79, in wrapper
> return method(self, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1853,
> in mergeSnapshots
> discard)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line
> 1251, in merge
> srcVol = vols[successor]
> KeyError: u'8f595e80-1013-4c14-a2f5-252bce9526fd'
>
> Ala, maybe you know if there is any known issue with mergeSnapshots?
> The use case here is VMs from oVirt 3.5 which got registered to oVirt 4.2.
>
> Regards,
> Maor
>
>
> On Wed, Feb 14, 2018 at 10:11 AM, Enrico Becchetti <
> enrico.becche...@pg.infn.it> wrote:
>
>>   Hi,
>> also you can download them throught these
>> links:
>>
>> https://owncloud.pg.infn.it/index.php/s/QpsTyGxtRTPYRTD
>> https://owncloud.pg.infn.it/index.php/s/ph8pLcABe0nadeb
>>
>> Thanks again 
>>
>> Best Regards
>> Enrico
>>
>> Il 13/02/2018 14:52, Maor Lipchuk ha scritto:
>>
>>
>>
>> On Tue, Feb 13, 2018 at 3:51 PM

Re: [ovirt-users] Import Domain and snapshot issue ... please help !!!

2018-02-14 Thread Maor Lipchuk
Seems like all the engine logs are full of the same error.
From vdsm.log.16.xz I can see an error which might explain this failure:

2018-02-12 07:51:16,161+0100 INFO  (ioprocess communication (40573))
[IOProcess] Starting ioprocess (__init__:447)
2018-02-12 07:51:16,201+0100 INFO  (jsonrpc/3) [vdsm.api] FINISH
mergeSnapshots return=None from=:::10.0.0.46,57032,
flow_id=fd4041b3-2301-44b0-aa65-02bd089f6568,
task_id=1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (api:52)
2018-02-12 07:51:16,275+0100 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
call Image.mergeSnapshots succeeded in 0.13 seconds (__init__:573)
2018-02-12 07:51:16,276+0100 INFO  (tasks/1)
[storage.ThreadPool.WorkerThread] START task
1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (cmd=>, args=None)
(threadPool:208)
2018-02-12 07:51:16,543+0100 INFO  (tasks/1) [storage.Image]
sdUUID=47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5 vmUUID=
imgUUID=ee9ab34c-47a8-4306-95d7-dd4318c69ef5
ancestor=9cdc96de-65b7-4187-8ec3-8190b78c1825
successor=8f595e80-1013-4c14-a2f5-252bce9526fdpostZero=False discard=False
(image:1240)
2018-02-12 07:51:16,669+0100 ERROR (tasks/1) [storage.TaskManager.Task]
(Task='1be430dc-eeb0-4dc9-92df-3f5b7943c6e0') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in _run
return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336,
in run
return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
79, in wrapper
return method(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1853, in
mergeSnapshots
discard)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 1251,
in merge
srcVol = vols[successor]
KeyError: u'8f595e80-1013-4c14-a2f5-252bce9526fd'

Ala, maybe you know if there is any known issue with mergeSnapshots?
The use case here is VMs from oVirt 3.5 which got registered to oVirt 4.2.

Regards,
Maor


On Wed, Feb 14, 2018 at 10:11 AM, Enrico Becchetti <
enrico.becche...@pg.infn.it> wrote:

>   Hi,
> also you can download them throught these
> links:
>
> https://owncloud.pg.infn.it/index.php/s/QpsTyGxtRTPYRTD
> https://owncloud.pg.infn.it/index.php/s/ph8pLcABe0nadeb
>
> Thanks again 
>
> Best Regards
> Enrico
>
> Il 13/02/2018 14:52, Maor Lipchuk ha scritto:
>
>
>
> On Tue, Feb 13, 2018 at 3:51 PM, Maor Lipchuk <mlipc...@redhat.com> wrote:
>
>>
>> On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti <
>> enrico.becche...@pg.infn.it> wrote:
>>
>>> see the attached files please ... thanks for your attention !!!
>>>
>>
>>
>> Seems like the engine logs do not contain the entire process, can you
>> please share older logs, from the time of the import operation?
>>
>
> And VDSM logs as well from your host
>
>
>>
>>
>>> Best Regards
>>> Enrico
>>>
>>>
>>> Il 13/02/2018 14:09, Maor Lipchuk ha scritto:
>>>
>>>
>>>
>>> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti <
>>> enrico.becche...@pg.infn.it> wrote:
>>>
>>>>  Dear All,
>>>> I have been using oVirt for a long time, with three hypervisors and an
>>>> external engine running in a CentOS VM.
>>>>
>>>> These three hypervisors have HBAs and access to Fibre Channel storage.
>>>> Until recently I used version 3.5, then I reinstalled everything from
>>>> scratch and now I have 4.2.
>>>>
>>>> Before formatting everything, I detached the storage data domain (FC)
>>>> with the virtual machines and reimported it into the new 4.2, and all
>>>> went well. In this domain there were virtual machines with and without
>>>> snapshots.
>>>>
>>>> Now I have two problems. The first is that if I try to delete a
>>>> snapshot the process does not end successfully and remains hanging, and
>>>> the second problem is that in one case I lost the virtual machine !!!
>>>>
>>>
>>>
>>> Not sure that I fully understand the scenario.
>>> How did the virtual machine get lost if you only tried to delete a
>>> snapshot?
>>>
>>>
>>>>
>>>> So I need your help to kill the three running zombie tasks, because with
>>>> taskcleaner.sh I can't do anything, and then I need to know how I can
>>>> delete the old snapshots made with 3.5 without losing other data and
>>>> without ending up with new processes that do not terminate correctly.

Re: [ovirt-users] VMs with multiple vdisks don't migrate

2018-02-14 Thread Maor Lipchuk
Hi Frank,

I already replied on your last email.
Can you provide the VDSM logs from the time of the migration failure for
both hosts:
  ginger.local.systea.fr and victor.local.systea.fr

Thanks,
Maor

On Wed, Feb 14, 2018 at 11:23 AM, fsoyer  wrote:

> Hi all,
> I discovered yesterday a problem when migrating VMs with more than one
> vdisk.
> On our test servers (oVirt 4.1, shared storage with Gluster), I created 2
> VMs needed for a test, from a template with a 20G vdisk. On these VMs I
> added a 100G vdisk (for these tests I didn't want to waste time extending
> the existing vdisks... but I lost time in the end...). The VMs with the 2
> vdisks work well.
> Now I saw some updates waiting on the host. I tried to put it in
> maintenance... but it got stuck on the two VMs. They were marked
> "migrating" but were no longer accessible. Other (small) VMs with only 1
> vdisk were migrated without problem at the same time.
> I saw that a kvm process for the (big) VMs was launched on the source AND
> destination host, but after tens of minutes the migration and the VMs were
> still frozen. I tried to cancel the migration for the VMs: it failed. The
> only way to stop it was to power off the VMs: the kvm process died on the
> 2 hosts and the GUI alerted on a failed migration.
> Just in case, I tried to delete the second vdisk on one of these VMs: it
> then migrated without error! And no access problem.
> I extended the first vdisk of the second VM, then deleted the second
> vdisk: it now migrates without problem!
>
> So after another test with a VM with 2 vdisks, I can say that this blocked
> the migration process :(
>
> In engine.log, for a VMs with 1 vdisk migrating well, we see :
>
> 2018-02-12 16:46:29,705+01 INFO  
> [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
> (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to
> object 
> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]',
> sharedLocks=''}'
> 2018-02-12 16:46:29,955+01 INFO  
> [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
> Running command: MigrateVmToServerCommand internal: false. Entities
> affected :  ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group
> MIGRATE_VM with role type USER
> 2018-02-12 16:46:30,261+01 INFO  
> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6',
> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='
> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false',
> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
> maxIncomingMigrations='2', maxOutgoingMigrations='2',
> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
> action={name=setDowntime, params=[200]}}, {limit=3,
> action={name=setDowntime, params=[300]}}, {limit=4,
> action={name=setDowntime, params=[400]}}, {limit=6,
> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
> params=[]}}]]'}), log id: 14f61ee0
> 2018-02-12 16:46:30,262+01 INFO  [org.ovirt.engine.core.
> vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
> (org.ovirt.thread.pool-6-thread-32)
> [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName
> = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true',
> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6',
> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='
> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false',
> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
> maxIncomingMigrations='2', maxOutgoingMigrations='2',
> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
> action={name=setDowntime, params=[200]}}, {limit=3,
> action={name=setDowntime, params=[300]}}, {limit=4,
> action={name=setDowntime, params=[400]}}, {limit=6,
> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
> params=[]}}]]'}), log id: 775cd381
> 2018-02-12 16:46:30,277+01 INFO  [org.ovirt.engine.core.
> vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
> (org.ovirt.thread.pool-6-thread-32)
> [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand,
> log id: 775cd381
> 2018-02-12 16:46:30,285+01 INFO  
> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> 

Re: [ovirt-users] VM with multiple vdisks can't migrate

2018-02-14 Thread Maor Lipchuk
Hi Frank,

Can you please attach the VDSM logs from the time of the migration failure
for both hosts:
  ginger.local.systea.fr and victor.local.systea.fr

Thanks,
Maor

On Tue, Feb 13, 2018 at 12:07 PM, fsoyer  wrote:

> Hi all,
> I discovered yesterday a problem when migrating VMs with more than one
> vdisk.
> On our test servers (oVirt 4.1, shared storage with Gluster), I created 2
> VMs needed for a test, from a template with a 20G vdisk. On these VMs I
> added a 100G vdisk (for these tests I didn't want to waste time extending
> the existing vdisks... but I lost time in the end...). The VMs with the 2
> vdisks work well.
> Now I saw some updates waiting on the host. I tried to put it in
> maintenance... but it got stuck on the two VMs. They were marked
> "migrating" but were no longer accessible. Other (small) VMs with only 1
> vdisk were migrated without problem at the same time.
> I saw that a kvm process for the (big) VMs was launched on the source AND
> destination host, but after tens of minutes the migration and the VMs were
> still frozen. I tried to cancel the migration for the VMs: it failed. The
> only way to stop it was to power off the VMs: the kvm process died on the
> 2 hosts and the GUI alerted on a failed migration.
> Just in case, I tried to delete the second vdisk on one of these VMs: it
> then migrated without error! And no access problem.
> I extended the first vdisk of the second VM, then deleted the second
> vdisk: it now migrates without problem!
>
> So after another test with a VM with 2 vdisks, I can say that this blocked
> the migration process :(
>
> In engine.log, for a VMs with 1 vdisk migrating well, we see :
>
> 2018-02-12 16:46:29,705+01 INFO  
> [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
> (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to
> object 
> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]',
> sharedLocks=''}'
> 2018-02-12 16:46:29,955+01 INFO  
> [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
> Running command: MigrateVmToServerCommand internal: false. Entities
> affected :  ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group
> MIGRATE_VM with role type USER
> 2018-02-12 16:46:30,261+01 INFO  
> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725]
> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6',
> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='
> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false',
> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
> maxIncomingMigrations='2', maxOutgoingMigrations='2',
> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
> action={name=setDowntime, params=[200]}}, {limit=3,
> action={name=setDowntime, params=[300]}}, {limit=4,
> action={name=setDowntime, params=[400]}}, {limit=6,
> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
> params=[]}}]]'}), log id: 14f61ee0
> 2018-02-12 16:46:30,262+01 INFO  [org.ovirt.engine.core.
> vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
> (org.ovirt.thread.pool-6-thread-32)
> [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName
> = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true',
> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1',
> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6',
> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='
> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false',
> migrationDowntime='0', autoConverge='true', migrateCompressed='false',
> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true',
> maxIncomingMigrations='2', maxOutgoingMigrations='2',
> convergenceSchedule='[init=[{name=setDowntime, params=[100]}],
> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2,
> action={name=setDowntime, params=[200]}}, {limit=3,
> action={name=setDowntime, params=[300]}}, {limit=4,
> action={name=setDowntime, params=[400]}}, {limit=6,
> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
> params=[]}}]]'}), log id: 775cd381
> 2018-02-12 16:46:30,277+01 INFO  [org.ovirt.engine.core.
> vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
> (org.ovirt.thread.pool-6-thread-32)
> [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand,
> log id: 775cd381
> 2018-02-12 16:46:30,285+01 INFO  
> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
> (org.ovirt.thread.pool-6-thread-32) 

Re: [ovirt-users] Import Domain and snapshot issue ... please help !!!

2018-02-13 Thread Maor Lipchuk
On Tue, Feb 13, 2018 at 3:51 PM, Maor Lipchuk <mlipc...@redhat.com> wrote:

>
> On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti <
> enrico.becche...@pg.infn.it> wrote:
>
>> see the attached files please ... thanks for your attention !!!
>>
>
>
> Seems like the engine logs do not contain the entire process, can you
> please share older logs, from the time of the import operation?
>

And VDSM logs as well from your host


>
>
>> Best Regards
>> Enrico
>>
>>
>> Il 13/02/2018 14:09, Maor Lipchuk ha scritto:
>>
>>
>>
>> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti <
>> enrico.becche...@pg.infn.it> wrote:
>>
>>>  Dear All,
>>> I have been using oVirt for a long time, with three hypervisors and an
>>> external engine running in a CentOS VM.
>>>
>>> These three hypervisors have HBAs and access to Fibre Channel storage.
>>> Until recently I used version 3.5, then I reinstalled everything from
>>> scratch and now I have 4.2.
>>>
>>> Before formatting everything, I detached the storage data domain (FC)
>>> with the virtual machines and reimported it into the new 4.2, and all
>>> went well. In this domain there were virtual machines with and without
>>> snapshots.
>>>
>>> Now I have two problems. The first is that if I try to delete a
>>> snapshot the process does not end successfully and remains hanging, and
>>> the second problem is that in one case I lost the virtual machine !!!
>>>
>>
>>
>> Not sure that I fully understand the scenario.
>> How did the virtual machine get lost if you only tried to delete a
>> snapshot?
>>
>>
>>>
>>> So I need your help to kill the three running zombie tasks, because with
>>> taskcleaner.sh I can't do anything, and then I need to know how I can
>>> delete the old snapshots made with 3.5 without losing other data and
>>> without ending up with new processes that do not terminate correctly.
>>>
>>> If you want some log files please let me know.
>>>
>>
>>
>> Hi Enrico,
>>
>> Can you please attach the engine and VDSM logs
>>
>>
>>>
>>> Thank you so much.
>>> Best Regards
>>> Enrico
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>> --
>> ___
>>
>> Enrico Becchetti    Servizio di Calcolo e Reti
>>
>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
>> Via Pascoli, c/o Dipartimento di Fisica  06123 Perugia (ITALY)
>> Phone: +39 075 5852777  Mail:
>> Enrico.Becchettipg.infn.it
>> __
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Import Domain and snapshot issue ... please help !!!

2018-02-13 Thread Maor Lipchuk
On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti <
enrico.becche...@pg.infn.it> wrote:

> see the attached files please ... thanks for your attention !!!
>


Seems like the engine logs do not contain the entire process, can you
please share older logs, from the time of the import operation?


> Best Regards
> Enrico
>
>
> Il 13/02/2018 14:09, Maor Lipchuk ha scritto:
>
>
>
> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti <
> enrico.becche...@pg.infn.it> wrote:
>
>>  Dear All,
>> I have been using oVirt for a long time, with three hypervisors and an
>> external engine running in a CentOS VM.
>>
>> These three hypervisors have HBAs and access to Fibre Channel storage.
>> Until recently I used version 3.5, then I reinstalled everything from
>> scratch and now I have 4.2.
>>
>> Before formatting everything, I detached the storage data domain (FC)
>> with the virtual machines and reimported it into the new 4.2, and all
>> went well. In this domain there were virtual machines with and without
>> snapshots.
>>
>> Now I have two problems. The first is that if I try to delete a
>> snapshot the process does not end successfully and remains hanging, and
>> the second problem is that in one case I lost the virtual machine !!!
>>
>
>
> Not sure that I fully understand the scenario.
> How did the virtual machine get lost if you only tried to delete a
> snapshot?
>
>
>>
>> So I need your help to kill the three running zombie tasks, because with
>> taskcleaner.sh I can't do anything, and then I need to know how I can
>> delete the old snapshots made with 3.5 without losing other data and
>> without ending up with new processes that do not terminate correctly.
>>
>> If you want some log files please let me know.
>>
>
>
> Hi Enrico,
>
> Can you please attach the engine and VDSM logs
>
>
>>
>> Thank you so much.
>> Best Regards
>> Enrico
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> --
> ___
>
> Enrico Becchetti    Servizio di Calcolo e Reti
>
> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
> Via Pascoli, c/o Dipartimento di Fisica  06123 Perugia (ITALY)
> Phone: +39 075 5852777  Mail:
> Enrico.Becchettipg.infn.it
> __
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Import Domain and snapshot issue ... please help !!!

2018-02-13 Thread Maor Lipchuk
On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti <
enrico.becche...@pg.infn.it> wrote:

>  Dear All,
> I have been using oVirt for a long time, with three hypervisors and an
> external engine running in a CentOS VM.
>
> These three hypervisors have HBAs and access to Fibre Channel storage.
> Until recently I used version 3.5, then I reinstalled everything from
> scratch and now I have 4.2.
>
> Before formatting everything, I detached the storage data domain (FC)
> with the virtual machines and reimported it into the new 4.2, and all
> went well. In this domain there were virtual machines with and without
> snapshots.
>
> Now I have two problems. The first is that if I try to delete a
> snapshot the process does not end successfully and remains hanging, and
> the second problem is that in one case I lost the virtual machine !!!
>


Not sure that I fully understand the scenario.
How did the virtual machine get lost if you only tried to delete a snapshot?


>
> So I need your help to kill the three running zombie tasks, because with
> taskcleaner.sh I can't do anything, and then I need to know how I can
> delete the old snapshots made with 3.5 without losing other data and
> without ending up with new processes that do not terminate correctly.
>
> If you want some log files please let me know.
>


Hi Enrico,

Can you please attach the engine and VDSM logs


>
> Thank you so much.
> Best Regards
> Enrico
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated

2018-02-08 Thread Maor Lipchuk
On Thu, Feb 8, 2018 at 10:34 AM, Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

> On Tue, Feb 6, 2018 at 9:33 PM, Maor Lipchuk <mlipc...@redhat.com> wrote:
> [cut]
> >> What I need is that information about the VMs is replicated to the
> >> remote site along with the disks.
> >> In an older test I had the issue that the disks were replicated to the
> >> remote site, but the VM configuration was not!
> >> I found the disks in the "Disks" tab of the storage domain, but nothing
> >> under VM Import.
> >
> >
> >
> > Can you reproduce it and attach the logs of the setup before the disaster
> > and after the recovery?
> > That could happen in case of new created VMs and Templates which were not
> > yet updated in the OVF_STORE disk, since the OVF_STORE update process was
> > not running yet before the disaster.
> > Since the time of a disaster can't be anticipated, gaps like this might
> > happen.
> >
>
> I haven't tried the recovery using ansible yet. It was an experiment
> with a possible procedure to be performed manually, and it was on 4.0.
> I asked about this unexpected behavior and Yaniv told me that it was
> due to the OVF_STORE not being updated, and that in 4.1 there is an API
> call that updates the OVF_STORE on demand.
>
> I'm creating a new setup today and I'll test again and check if I
> still hit the issue. Anyway, if the problem persists I think that the
> engine, for DR purposes, should update the OVF_STORE as soon as
> possible when a new VM is created or has disks added.
>


If the engine updated the OVF_STORE on every VM change it could affect
oVirt performance, since it is a heavy operation,
although we do have some ideas to change that design so that every VM change
will only update the VM's OVF instead of the whole OVF_STORE disk.


> [cut]
> >>
> >> Ok, but if i keep master storage domain on a non replicate volume, do
> >> i require this function?
> >
> >
> > Basically it should also fail on VM/Template registration in oVirt 4.1,
> > since there are also other functionalities, like the mapping of OVF
> > attributes, which were added to VM/Template registration.
> >
>
> What do you mean? That i could fail to import any VM/Template? In what
> case?
>


If using the fail-over in ovirt-ansible-disaster-recovery, the VM/Template
registration process is done automatically through the ovirt-ansible
tasks, and it is based on the oVirt 4.2 API.
The task which registers the VMs and the Templates is done there
without indicating the target cluster id, since in oVirt 4.2 we already
added the cluster name to the VM's/Template's OVF.
If your engine is oVirt 4.1 the registration will fail, since in oVirt 4.1
the cluster id is mandatory.
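
For illustration, a minimal sketch of that registration step with the oVirt
Python SDK could look like the following; the engine URL, credentials and
the domain/cluster names are placeholders, and the register action with its
cluster and allow_partial_import parameters is assumed from the 4.1/4.2 API:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

sds_service = connection.system_service().storage_domains_service()
sd = sds_service.list(search='name=replicated_data')[0]   # placeholder name
sd_vms_service = sds_service.storage_domain_service(sd.id).vms_service()

# VMs that exist on the attached domain but are not yet registered in the engine.
for unregistered_vm in sd_vms_service.list(unregistered=True):
    sd_vms_service.vm_service(unregistered_vm.id).register(
        cluster=types.Cluster(name='cluster_A'),   # mandatory before oVirt 4.2
        allow_partial_import=True,
    )

connection.close()

With oVirt 4.2 the cluster argument can be omitted, since the OVF already
carries the cluster name, as described above.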


>
> Another question:
>
> we have 2 DCs in main site, do we require to have also 2 DCs in
> recovery site o we can import all the storage domains in a single DC
> on recovery site? There could be uuid collisions or similar?
>


I think it could work, although I suggest that the clusters should be
compatible with those configured in the primary setup,
otherwise you might encounter problems when you try to fail back (and
also to avoid any collisions of affinity groups/labels or networks).
For example if in your primary site you had DC1 with cluster1 and DC2 with
cluster2 then your secondary setup should be DC_Secondary with cluster_A
and cluster_B.
cluster1 will be mapped to cluster_A and cluster2 will be mapped to
cluster_B.

Another thing that might be troubling is with the master domain attribute
in the mapping var file.
That attribute indicates which storage domain is master or not.
Here is an example how it is being configured in the mapping file:
- dr_domain_type: nfs
  dr_primary_dc_name: Prod
  dr_primary_name: data_number
  dr_master_domain: True
  dr_secondary_dc_name: Recovery
  dr_secondary_address:
...


In your primary site you have two master storage domains, and in your
secondary site what will probably happen is that on import of storage
domains only one of those two storage domains will be master.
Now that I think of it, it might be better to configure the master
attribute for each of the setups, like so:
  dr_primary_master_domain: True
  dr_secondary_master_domain: False


>
> Thank you so much for your replies,
>
> Luca
> --
> "E' assurdo impiegare gli uomini di intelligenza eccellente per fare
> calcoli che potrebbero essere affidati a chiunque se si usassero delle
> macchine"
> Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
>
> "Internet è la più grande biblioteca del mondo.
> Ma il problema è che i libri sono tutti sparsi sul pavimento"
> John Allen Paulos, Matematico (1945-vivente)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
> lorenzetto.l...@gmail.com>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated

2018-02-06 Thread Maor Lipchuk
On Tue, Feb 6, 2018 at 11:32 AM, Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

> On Mon, Feb 5, 2018 at 7:20 PM, Maor Lipchuk <mlipc...@redhat.com> wrote:
> > Hi Luca,
> >
> > Thank you for your interest in the Disaster Recovery ansible solution, it
> > is great to see users get familiar with it.
> > Please see my comments inline
> >
> > Regards,
> > Maor
> >
> > On Mon, Feb 5, 2018 at 7:54 PM, Yaniv Kaul <yk...@redhat.com> wrote:
> >>
> >>
> >>
> >> On Feb 5, 2018 5:00 PM, "Luca 'remix_tj' Lorenzetto"
> >> <lorenzetto.l...@gmail.com> wrote:
> >>
> >> Hello,
> >>
> >> i'm starting the implementation of our disaster recovery site with RHV
> >> 4.1.latest for our production environment.
> >>
> >> Our production setup is very easy, with self hosted engine on dc
> >> KVMPDCA, and virtual machines both in KVMPDCA and KVMPD dcs. All our
> >> setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA
> >> and EMC VNX8000. Both storage arrays support replication via their
> >> own replication protocols (SRDF, MirrorView), so we'd like to delegate
> >> to them the replication of data to the remote site, which is located
> >> in another remote datacenter.
> >>
> >> In the KVMPD DC we have some storage domains that contain non-critical
> >> VMs, which we don't want to replicate to the remote site (in case of
> >> failure they have a low priority and will be restored from a backup).
> >> In our setup we won't replicate them, so they will not be available for
> >> attachment on the remote site. Can this be an issue? Do we require to
> >> replicate everything?
> >
> >
> > No, it is not required to replicate everything.
> > If there are no disks on those storage domains that are attached to your
> > critical VMs/Templates, you don't have to use them as part of your mapping
> > var file.
> >
>
> Excellent.
>
> >>
> >> What about master domain? Do i require that the master storage domain
> >> stays on a replicated volume or can be any of the available ones?
> >
> >
> >
> > You can choose which storage domains you want to recover.
> > Basically, if a storage domain is indicated as "master" in the mapping
> var
> > file then it should be attached first to the Data Center.
> > If your secondary setup already contains a master storage domain which
> > you don't care to replicate and recover, then you can configure your
> > mapping var file to only attach regular storage domains, simply indicate
> > "dr_master_domain: False" in the dr_import_storages for all the storage
> > domains. (You can contact me on IRC if you need some guidance with it)
> >
>
> Good,
>
> that's my case. I don't need a new master domain on remote side,
> because is an already up and running setup where i want to attach
> replicated storage and run the critical VMs.
>
>
>
> >>
> >>
> >> I've seen that since 4.1 there's an API for updating OVF_STORE disks.
> >> Do we require to invoke it with a frequency that is compatible
> >> with the replication frequency on the storage side?
> >
> >
> >
> > No, you don't have to use the update OVF_STORE disk for replication.
> > The OVF_STORE disk is being updated every 60 minutes (The default
> > configuration value),
> >
>
> What I need is that information about the VMs is replicated to the
> remote site along with the disks.
> In an older test I had the issue that the disks were replicated to the
> remote site, but the VM configuration was not!
> I found the disks in the "Disks" tab of the storage domain, but nothing
> under VM Import.
>


Can you reproduce it and attach the logs of the setup before the disaster
and after the recovery?
That could happen in the case of newly created VMs and Templates which were not
yet updated in the OVF_STORE disk, since the OVF_STORE update process was
not running yet before the disaster.
Since the time of a disaster can't be anticipated, gaps like this might
happen.


>
> >>
> >> We set at the moment
> >> RPO to 1hr (even if planned RPO requires 2hrs). Does OVF_STORE gets
> >> updated with the required frequency?
> >
> >
> >
> > OVF_STORE disk is being updated every 60 minutes but keep in mind that
> the
> > OVF_STORE is being updated internally in the engine so it might not be
> > synced with the RPO which you configured.
> > If I understood correctly, then you are right by indicating that the data
> > of the storage domain will be synced at approximately 2 hours = RPO of 1hr +
> > OVF_STORE update of 1hr.

Re: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated

2018-02-05 Thread Maor Lipchuk
Hi Luca,

Thank you for your interest in the Disaster Recovery ansible solution, it is
great to see users get familiar with it.
Please see my comments inline

Regards,
Maor

On Mon, Feb 5, 2018 at 7:54 PM, Yaniv Kaul <yk...@redhat.com> wrote:

>
>
> On Feb 5, 2018 5:00 PM, "Luca 'remix_tj' Lorenzetto" <
> lorenzetto.l...@gmail.com> wrote:
>
> Hello,
>
> i'm starting the implementation of our disaster recovery site with RHV
> 4.1.latest for our production environment.
>
> Our production setup is very easy, with self hosted engine on dc
> KVMPDCA, and virtual machines both in KVMPDCA and KVMPD dcs. All our
> setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA
> and EMC VNX8000. Both storage arrays support replication via their
> own replication protocols (SRDF, MirrorView), so we'd like to delegate
> to them the replication of data to the remote site, which is located
> in another remote datacenter.
>
> In the KVMPD DC we have some storage domains that contain non-critical
> VMs, which we don't want to replicate to the remote site (in case of
> failure they have a low priority and will be restored from a backup).
> In our setup we won't replicate them, so they will not be available for
> attachment on the remote site. Can this be an issue? Do we require to
> replicate everything?
>
>
No, it is not required to replicate everything.
If there are no disks on those storage domains that are attached to your
critical VMs/Templates, you don't have to use them as part of your mapping
var file.


> What about master domain? Do i require that the master storage domain
> stays on a replicated volume or can be any of the available ones?
>
>

You can choose which storage domains you want to recover.
Basically, if a storage domain is indicated as "master" in the mapping var
file then it should be attached first to the Data Center.
If your secondary setup already contains a master storage domain which you
don't care to replicate and recover, then you can configure your mapping var
file to only attach regular storage domains, simply indicate
"dr_master_domain: False" in the dr_import_storages for all the storage
domains. (You can contact me on IRC if you need some guidance with it)


>
> I've seen that since 4.1 there's an API for updating OVF_STORE disks.
> Do we require to invoke it with a frequency that is compatible
> with the replication frequency on the storage side?
>
>

No, you don't have to use the update OVF_STORE disk for replication.
The OVF_STORE disk is being updated every 60 minutes (the default
configuration value).
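
If you do want to trigger it yourself (for example right before a planned
storage sync), a minimal sketch with the Python SDK could look like the
following; the connection details and the domain name are placeholders, and
it assumes the SDK exposes the update-OVF-store action as update_ovf_store():

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

sds_service = connection.system_service().storage_domains_service()
sd = sds_service.list(search='name=replicated_data')[0]   # placeholder name

# Ask the engine to write the current VM/Template OVFs to the OVF_STORE
# disks of this domain now, instead of waiting for the periodic update.
sds_service.storage_domain_service(sd.id).update_ovf_store()

connection.close()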


> We set at the moment
> RPO to 1hr (even if planned RPO requires 2hrs). Does OVF_STORE gets
> updated with the required frequency?
>
>

OVF_STORE disk is being updated every 60 minutes but keep in mind that the
OVF_STORE is being updated internally in the engine so it might not be
synced with the RPO which you configured.
If I understood correctly, then you are right by indicating that the data
of the storage domain will be synced at approximately 2 hours = RPO of 1hr +
OVF_STORE update of 1hr.


>
> I've seen a recent presentation by Maor Lipchuk that is showing the
> "automagic" ansible role for disaster recovery:
>
> https://www.slideshare.net/maorlipchuk/ovirt-dr-site-tosite-using-ansible
>
> It's also related with some youtube presentations demonstrating a real
> DR plan execution.
>
> But what i've seen is that Maor is explicitly talking about 4.2
> release. Does that role work only with >4.2 releases, or can it be used
> also on earlier (4.1) versions?
>
>
> Releases before 4.2 do not store complete information on the OVF store to
> perform such comprehensive failover. I warmly suggest 4.2!
> Y.
>

Indeed,
We also introduced several functionalities, like detach of the master storage
domain and attach of a "dirty" master storage domain, which are dependent on
the failover process, so unfortunately to support a full recovery process
you will need an oVirt 4.2 environment.


>
> I've tested a manual flow of replication + recovery through Import SD
> followed by Import VM, and it worked like a charm. Using a prebuilt
> ansible role will reduce my effort in creating new automation for
> doing this.
>
> Anyone has experiences like mine?
>
> Thank you for the help you may provide, i'd like to contribute back to
> you with all my findings and with an usable tool (also integrated with
> storage arrays if possible).
>
>
Please feel free to share your comments and questions, I would very much
appreciate hearing about your user experience.


>
> Luca
>
> (Sorry for duplicate email, ctrl-enter happened before mail completion)
>
>
> --
> "E' assurdo impiegare gli uomini di intelligenza eccellente per fare
> calcoli che potrebb

Re: [ovirt-users] Error "Did not connect host to storage domain because connection for connectionId is null" in ovirt 4.1

2017-12-13 Thread Maor Lipchuk
We need the logs to check what has happened.
Can you please open a bug on that and attach the engine and vdsm logs

Regards,
Maor

On Tue, Dec 12, 2017 at 3:49 PM, Simone Tiraboschi 
wrote:

>
>
> On Mon, Dec 11, 2017 at 10:29 PM, Claude Durocher <
> claude.duroc...@cptaq.gouv.qc.ca> wrote:
>
>> I have a 4.1 ovirt environment. I cannot reactivate a storage domain
>> (data-master) and I get an error message stating "connection for
>> connectionId 'b3011e5b-552e-4393-a758-ac1e35648ab1' is null". I also
>> cannot delete this storage domain as it's a master domain.
>>
>> 2017-12-11 15:25:10,971-05 INFO  [org.ovirt.engine.core.bll.sto
>> rage.domain.ActivateStorageDomainCommand] (default task-22)
>> [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] Lock Acquired to object
>> 'EngineLock:{exclusiveLocks='[5662588b-81d2-4da9-b942-8918004770fe=STORAGE]',
>> sharedLocks=''}'
>> 2017-12-11 15:25:10,999-05 INFO  [org.ovirt.engine.core.bll.sto
>> rage.domain.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-50)
>> [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] Running command:
>> ActivateStorageDomainCommand internal: false. Entities affected :  ID:
>> 5662588b-81d2-4da9-b942-8918004770fe Type: StorageAction group
>> MANIPULATE_STORAGE_DOMAIN with role type ADMIN
>> 2017-12-11 15:25:11,002-05 INFO  [org.ovirt.engine.core.bll.sto
>> rage.domain.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-50)
>> [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] Lock freed to object
>> 'EngineLock:{exclusiveLocks='[5662588b-81d2-4da9-b942-8918004770fe=STORAGE]',
>> sharedLocks=''}'
>> 2017-12-11 15:25:11,002-05 INFO  [org.ovirt.engine.core.bll.sto
>> rage.domain.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-50)
>> [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] ActivateStorage Domain. Before
>> Connect all hosts to pool. Time: Mon Dec 11 15:25:11 EST 2017
>> 2017-12-11 15:25:11,015-05 WARN  [org.ovirt.engine.core.bll.sto
>> rage.connection.BaseFsStorageHelper] (org.ovirt.thread.pool-6-thread-43)
>> [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] Did not connect host
>> 'e6dbdeb8-4e0e-4589-9121-6d3408c6d7b0' to storage domain
>> 'ovirt-lg-1-lun1' because connection for connectionId
>> 'b3011e5b-552e-4393-a758-ac1e35648ab1' is null.
>> 2017-12-11 15:25:11,016-05 ERROR [org.ovirt.engine.core.bll.sto
>> rage.domain.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-50)
>> [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] Cannot connect storage server,
>> aborting Storage Domain activation.
>> 2017-12-11 15:25:11,017-05 INFO  [org.ovirt.engine.core.bll.sto
>> rage.domain.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-50)
>> [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] Command
>> [id=0fb1fa9d-4002-4fe0-9af2-d30470d5f146]: Compensating
>> CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.b
>> usinessentities.StoragePoolIsoMap; snapshot:
>> EntityStatusSnapshot:{id='StoragePoolIsoMapId:{storagePoolId
>> ='a0cc9e2c-6bff-4d5b-803c-0cd62292c269', 
>> storageId='5662588b-81d2-4da9-b942-8918004770fe'}',
>> status='Maintenance'}.
>> 2017-12-11 15:25:11,023-05 ERROR [org.ovirt.engine.core.dal.dbb
>> roker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-50)
>> [49dfce30-0e2c-43f8-942d-67a2b56b3ef8] EVENT_ID:
>> USER_ACTIVATE_STORAGE_DOMAIN_FAILED(967), Correlation ID:
>> 49dfce30-0e2c-43f8-942d-67a2b56b3ef8, Job ID:
>> 2d766e2a-4b35-4822-9580-4eb9df2d9c33, Call Stack: null, Custom ID: null,
>> Custom Event ID: -1, Message: Failed to activate Storage Domain
>> ovirt-lg-1-lun1 (Data Center ovirt-lg-1) by admin@internal-authz
>>
>>
>> select id,storage,storage_name,_update_date from storage_domain_static;
>>   id                                  |               storage                |      storage_name      |         _update_date
>> --------------------------------------+--------------------------------------+------------------------+-------------------------------
>>  072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | ceab03af-7220-4d42-8f5c-9b557f5d29af | ovirt-image-repository |
>>  b66443f4-d949-4fca-9825-8cb98eae9e14 | e462883f-0be1-4b31-acf1-154dd7accbc1 | export                 |
>>  45807651-8f10-408f-891b-64f1cc577a64 | 452235ea-d8b1-43d1-920f-a16a032e000b | ovirt-lg-1-iso         | 2017-01-20 20:01:00.284097+00
>>  a1d803b9-fd0b-477f-a874-53d050d4347b | cd8939df-9cdd-49d3-8dc5-3c06d1836399 | ovirt-lg-1-export      | 2017-01-20 20:01:14.70549+00
>>  b67e7442-f032-4b5c-a4fe-10422650a90b | c6272852-3586-4584-8496-d97c4370c798 | ovirt-5-iso            | 2017-01-20 20:02:08.668034+00
>>  a561589c-8eb8-4823-9615-92ac4a1ea94e | cacb8801-6826-42cb-9bab-5552175e0329 | ovirt-lg-2-export      |
>>  646b331a-b68e-4894-807f-bdc8adae15c9 | a592b780-7bd6-4599-ab7e-12c43cb9279d | ovirt-lg-2-iso         |
>>  3ba53a0b-30be-497c-b4df-880e6c6f7567 | 8e5236c9-cbc0-4256-abff-7e984abef65a | master                 | 2017-12-11 02:12:16.499288+00
>>  3efa5ed5-4e17-4daf-9cc0-6819a9cd7aae | 

Re: [ovirt-users] Move disk between domains

2017-12-11 Thread Maor Lipchuk
On Mon, Dec 11, 2017 at 6:30 PM, Matthew DeBoer <matthewdeboe...@gmail.com>
wrote:

> The permissions are all ok. vdsm kvm.
>
> This shows up in vdsm.log when the snapshot is tried.
>
>
> 2017-12-08 13:03:13,031-0600 ERROR (jsonrpc/7) [virt.vm]
> (vmId='6a53d8a9-3b4d-4995-8b84-dc920badf0fc') Unable to take snapshot
> (vm:3699)
> Traceback (most recent call last):
>  File "/usr/share/vdsm/virt/vm.py", line 3696, in snapshot
>self._dom.snapshotCreateXML(snapxml, snapFlags)
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69,
> in f
>ret = attr(*args, **kwargs)
>  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line
> 123, in wrapper
>ret = f(*args, **kwargs)
>  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1006, in
> wrapper
>return func(inst, *args, **kwargs)
>  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2506, in
> snapshotCreateXML
>if ret is None:raise libvirtError('virDomainSnapshotCreateXML()
> failed', dom=self)
> libvirtError: internal error: unable to execute QEMU command
> 'transaction': Could not read L1 table: Input/output error
>


This looks like a qcow issue, I would try to send this to the qemu discuss
list at https://lists.nongnu.org/mailman/listinfo/qemu-discuss
Are you using Gluster? I encountered a discussion there which has a similar
error although it was related to Gluster:
  https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg04742.html


>
>
> On Sun, Dec 10, 2017 at 6:34 AM, Maor Lipchuk <mlipc...@redhat.com> wrote:
>
>> On Fri, Dec 8, 2017 at 8:01 PM, Matthew DeBoer <matthewdeboe...@gmail.com
>> > wrote:
>>
>>> When i try to move a specific disk between storage domains i get an
>>> error.
>>>
>>> 2017-12-08 11:26:05,257-06 ERROR [org.ovirt.engine.ui.frontend.
>>> server.gwt.OvirtRemoteLoggingService] (default task-41) [] Permutation
>>> name: 8C01181C3B121D0AAE1312275CC96415
>>> 2017-12-08 11:26:05,257-06 ERROR [org.ovirt.engine.ui.frontend.
>>> server.gwt.OvirtRemoteLoggingService] (default task-41) [] Uncaught
>>> exception: com.google.gwt.core.client.JavaScriptException: (TypeError)
>>> __gwt$exception: : Cannot read property 'F' of null
>>>at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocati
>>> onModel$3.$onSuccess(DisksAllocationModel.java:120)
>>>at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocati
>>> onModel$3.onSuccess(DisksAllocationModel.java:120)
>>>at 
>>> org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess(Frontend.java:233)
>>> [frontend.jar:]
>>>at 
>>> org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend.java:233)
>>> [frontend.jar:]
>>>at org.ovirt.engine.ui.frontend.communication.OperationProcesso
>>> r$2.$onSuccess(OperationProcessor.java:139) [frontend.jar:]
>>>at org.ovirt.engine.ui.frontend.communication.OperationProcesso
>>> r$2.onSuccess(OperationProcessor.java:139) [frontend.jar:]
>>>at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicati
>>> onProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:269)
>>> [frontend.jar:]
>>>at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicati
>>> onProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:269)
>>> [frontend.jar:]
>>>at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.o
>>> nResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:]
>>>at 
>>> com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:237)
>>> [gwt-servlet.jar:]
>>>at 
>>> com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409)
>>> [gwt-servlet.jar:]
>>>at Unknown.eval(webadmin-0.js@65)
>>>at com.google.gwt.core.client.impl.Impl.apply(Impl.java:296)
>>> [gwt-servlet.jar:]
>>>at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335)
>>> [gwt-servlet.jar:]
>>>at Unknown.eval(webadmin-0.js@54)
>>>
>>> All the other disks i can move.
>>>
>>> The issue here is how i got this storage domain into ovirt i think.
>>>
>>> I set up a new cluster using 4.1 coming from 3.6.
>>>
>>> I imported a domain from the 3.6 cluster. I am trying to move this disk
>>> to one of the new storage domains on the 4.1 cluster.
>>>
>>
>>>
>>> Any help would be greatly appreciated
>>>
>>
>>
>> I would try to check the user permissions on that storage domain or the
>> disk
>>
>> Regards,
>> Maor
>>
>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move disk between domains

2017-12-10 Thread Maor Lipchuk
On Fri, Dec 8, 2017 at 8:01 PM, Matthew DeBoer 
wrote:

> When i try to move a specific disk between storage domains i get an error.
>
> 2017-12-08 11:26:05,257-06 ERROR 
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-41) [] Permutation name: 8C01181C3B121D0AAE1312275CC96415
> 2017-12-08 11:26:05,257-06 ERROR 
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-41) [] Uncaught exception: 
> com.google.gwt.core.client.JavaScriptException:
> (TypeError)
> __gwt$exception: : Cannot read property 'F' of null
>at org.ovirt.engine.ui.uicommonweb.models.storage.
> DisksAllocationModel$3.$onSuccess(DisksAllocationModel.java:120)
>at org.ovirt.engine.ui.uicommonweb.models.storage.
> DisksAllocationModel$3.onSuccess(DisksAllocationModel.java:120)
>at 
> org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess(Frontend.java:233)
> [frontend.jar:]
>at org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend.java:233)
> [frontend.jar:]
>at org.ovirt.engine.ui.frontend.communication.
> OperationProcessor$2.$onSuccess(OperationProcessor.java:139)
> [frontend.jar:]
>at org.ovirt.engine.ui.frontend.communication.OperationProcessor$2.
> onSuccess(OperationProcessor.java:139) [frontend.jar:]
>at org.ovirt.engine.ui.frontend.communication.
> GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:269)
> [frontend.jar:]
>at org.ovirt.engine.ui.frontend.communication.
> GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:269)
> [frontend.jar:]
>at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.
> onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:]
>at 
> com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:237)
> [gwt-servlet.jar:]
>at 
> com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409)
> [gwt-servlet.jar:]
>at Unknown.eval(webadmin-0.js@65)
>at com.google.gwt.core.client.impl.Impl.apply(Impl.java:296)
> [gwt-servlet.jar:]
>at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335)
> [gwt-servlet.jar:]
>at Unknown.eval(webadmin-0.js@54)
>
> All the other disks i can move.
>
> The issue here is how i got this storage domain into ovirt i think.
>
> I set up a new cluster using 4.1 coming from 3.6.
>
> I imported a domain from the 3.6 cluster. I am trying to move this disk to
> one of the new storage domains on the 4.1 cluster.
>

>
> Any help would be greatly appreciated
>


I would try to check the user permissions on that storage domain or the disk
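
For example, a quick sketch with the Python SDK to list the permissions
assigned on the disk (the disk id and the connection details are
placeholders, not values from this thread):

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

disk_service = connection.system_service().disks_service().disk_service(
    'DISK-UUID-HERE',                                     # placeholder disk id
)

# Each permission entry links a user or group to a role on the disk.
for perm in disk_service.permissions_service().list():
    print(perm.id,
          perm.role.id if perm.role else None,
          perm.user.id if perm.user else (perm.group.id if perm.group else None))

connection.close()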

Regards,
Maor


>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practice for iSCSI storage domains

2017-12-07 Thread Maor Lipchuk
On Thu, Dec 7, 2017 at 9:10 AM, Richard Chan 
wrote:

> What is the best practice for iSCSI storage domains:
>
> Many small targets vs a few large targets?
>
> Specific example: if you wanted a 8TB storage domain would you prepare a
> single 8TB LUN or (for example) 8 x 1 TB LUNs.
>

There could be many reasons to use each type.
Off the top of my head, I think that configuration-wise it will be better
to configure more than one LUN.
That can be helpful if you plan to use external LUN disks, for example.

Multiple targets might also come in useful if you plan to configure iSCSI
multipathing in the future; that way you can choose only part of the targets
to apply the MPIO on.


>
>
>
>
> --
> Richard Chan
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Scheduling daily Snapshot

2017-12-06 Thread Maor Lipchuk
On Wed, Dec 6, 2017 at 6:01 PM, Jason Lelievre 
wrote:

> Hello,
>
> What is the best way to set up a daily live snapshot for all VM, and have
> the possibility to recover, for example, a specific VM to a specific day?
>
> I use a Hyperconverged Infrastructure with 3 nodes, gluster storage.
>
> Thank you,
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
One idea is to use crontab to run a daily script which uses the engine SDK
to list all VMs and create a snapshot for each one.
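
For example, a minimal sketch of such a script with the Python SDK
(ovirtsdk4) could look like the following; the engine URL, credentials and
snapshot naming are placeholders, and pruning of old snapshots would still
need to be handled separately:

#!/usr/bin/env python
# Minimal sketch: create a disk-only snapshot for every VM.
# Example cron entry (hypothetical path): 0 2 * * * /usr/local/bin/daily_snapshots.py
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

vms_service = connection.system_service().vms_service()
for vm in vms_service.list():
    snapshots_service = vms_service.vm_service(vm.id).snapshots_service()
    snapshots_service.add(
        types.Snapshot(
            description='daily-%s' % time.strftime('%Y-%m-%d'),
            persist_memorystate=False,  # do not save memory, disks only
        ),
    )

connection.close()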
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] critical production issue for a vm

2017-12-06 Thread Maor Lipchuk
On Wed, Dec 6, 2017 at 12:30 PM, Nicolas Ecarnot 
wrote:

> Le 06/12/2017 à 11:21, Nathanaël Blanchet a écrit :
>
>> Hi all,
>>
>> I'm about to lose one very important vm. I shut down this vm for
>> maintenance and then I moved the four disks to a new created lun. This vm
>> has 2 snapshots.
>>
>> After successful move, the vm refuses to start with this message:
>>
>> Bad volume specification {u'index': 0, u'domainID':
>> u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format':
>> u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID':
>> u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648',
>> u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a',
>> u'specParams': {}, u'readonly': u'false', u'iface': u'virtio', u'optional':
>> u'false', u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a',
>> 'truesize': '2147483648', u'poolID':
>> u'48ca3019-9dbf-4ef3-98e9-08105d396350', u'device': u'disk', u'shared':
>> u'false', u'propagateErrors': u'off', u'type': u'disk'}.
>>
>> I tried to merge the snapshots, export, clone from snapshot, copy disks,
>> or deactivate disks, and every action fails when it involves that disk.
>>
>> I began to dd the LV group to get a new VM intended for a standalone
>> libvirt/kvm; the VM more or less boots up, but it is an outdated version
>> from before the first snapshot. There are a lot of LVs when doing
>> "lvs | grep 961ea94a", supposedly disk snapshots. Which of them must I
>> choose to get the last state of the VM before shutdown? I'm not used to
>> dealing with snapshots via virsh/libvirt, so some help would be much
>> appreciated.
>>
>
The disks which you want to copy should contain the entire volume chain.
Based on the log you mentioned, it looks like this image is problematic:

  storage id: '961ea94a-aced-4dd0-a9f0-266ce1810177',
  imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a
  volumeID': u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b'

What if you try to deactivate this image and try to run the VM, will it run?
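
If it helps, a minimal sketch of doing that through the Python SDK follows;
the VM id and connection details are placeholders, and the disk id is the
imageID from the error above:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

vm_service = connection.system_service().vms_service().vm_service(
    'VM-UUID-HERE',                                       # placeholder VM id
)

# Deactivate the problematic disk (imageID from the error above), then
# try to start the VM without it.
vm_service.disk_attachments_service().attachment_service(
    '4a95614e-bf1d-407c-aa72-2df414abcb7a',
).update(types.DiskAttachment(active=False))

vm_service.start()

connection.close()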




>
>> Is there some unknown command to recover this vm into ovirt?
>>
>> Thank you in advance.
>>
>>
>>




>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
> Besides the specific oVirt answers, did you try to get information about the
> snapshot tree with qemu-img info --backing-chain on the relevant /dev/...
> logical volume?
> As you know how to dd from LVs, you could extract every needed snapshot
> file and rebuild your VM outside of oVirt.
> Then take time to re-import it later, safely.
>
> --
> Nicolas ECARNOT
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] deprecating export domain?

2017-10-15 Thread Maor Lipchuk
On Sun, Oct 15, 2017 at 8:17 PM, Yaniv Kaul <yk...@redhat.com> wrote:
>
>
> On Sun, Oct 15, 2017 at 7:13 PM, Nir Soffer <nsof...@redhat.com> wrote:
>>
>> On Sun, Oct 1, 2017 at 3:32 PM Maor Lipchuk <mlipc...@redhat.com> wrote:
>>>
>>> On Sun, Oct 1, 2017 at 2:50 PM, Nir Soffer <nsof...@redhat.com> wrote:
>>> > On Sun, Oct 1, 2017 at 9:58 AM Yaniv Kaul <yk...@redhat.com> wrote:
>>> >>
>>> >> On Sat, Sep 30, 2017 at 8:41 PM, Charles Kozler <ckozler...@gmail.com>
>>> >> wrote:
>>> >>>
>>> >>> Hello,
>>> >>>
>>> >>> I recently read on this list from a redhat member that export domain
>>> >>> is
>>> >>> either being deprecated or looking at being deprecated
>>> >
>>> >
>>> > We want to deprecate the export domain, but it is not deprecated yet.
>>> >
>>> >>>
>>> >>>
>>> >>> To that end, can you share details? Can you share any
>>> >>> notes/postings/bz's
>>> >>> that document this? I would imagine something like this would be
>>> >>> discussed
>>> >>> in larger audience
>>> >
>>> >
>>> > I agree.
>>> >
>>> >>>
>>> >>>
>>> >>> This seems like a somewhat significant change to make and I am
>>> >>> curious
>>> >>> where this is scheduled? Currently, a lot of my backups rely
>>> >>> explicitly on
>>> >>> an export domain for online snapshots, so I'd like to plan
>>> >>> accordingly
>>> >
>>> >
>>> > Can you describe how you backup your vms using export domain?
>>> > What do you mean by online snapshots?
>>> >
>>> >>
>>> >>
>>> >> We believe that the ability to detach and attach a data domain
>>> >> provides
>>> >> equivalent and even superior functionality to the export domain. Is
>>> >> there
>>> >> anything you'd miss? I don't believe it would be a significant change.
>>> >
>>> >
>>> > Attaching and detaching data domain was not designed for backing up
>>> > vms.
>>> > How would you use it for backup?
>>> >
>>> > How do you ensure that a backup clone of a vm is not started by
>>> > mistake,
>>> > changing the backup contents?
>>>
>>> That is a good question.
>>> We recently introduced a new feature called "backup storage domain"
>>> with which you can mark a storage domain as a backup storage domain.
>>> That can guarantee that no VMs will run with disks/leases residing on
>>> the storage domain.
>>
>>
>> How older systems will handle the backup domain, when they do not
>> know about backup domains?


What do you mean?
The backup storage domain is a DB configuration used in the engine; it
does not depend on VDSM, nor does it have any indication of backup in
its metadata.

>>
>>>
>>> The feature should already exist in oVirt 4.2 (despite a bug that
>>> should be handled with this patch https://gerrit.ovirt.org/#/c/81290/)
>>> You can find more information on this here:
>>>
>>> https://github.com/shubham0d/ovirt-site/blob/41dcb0f1791d90d1ae0ac43cd34a399cfedf54d8/source/develop/release-management/features/storage/backup-storage-domain.html.md
>>>
>>> Basically the OVF that is being saved in the export domain should be
>>> similar to the same one that is being saved in the OVF_STORE disk in
>>> the storage domain.
>>
>>
>> There is no guarantee that the OVF_STORE will contain the vm xml after
>> the domain is detached.
>
>
> I believe we need to ensure that before detach, OVF store update succeeds
> and fail to detach otherwise. We may wish to have a 'force detach' to detach
> even if OVF store update fails for some reason.
> Y.
>


Why on detach, and not when moving the storage domain to maintenance?
We already have this mechanism today when moving a storage domain to
maintenance, although IINM the operation does not fail if the OVF
update fails; that can be easily fixed.

>>
>>>
>>> If the user manages replication on that storage domain it can be
>>> re-used for backup purposes by importing it to a setup.
>>> Actually it is much more efficient to use a data storage domain than
>>> to use the copy operation

Re: [ovirt-users] deprecating export domain?

2017-10-01 Thread Maor Lipchuk
On Sun, Oct 1, 2017 at 2:50 PM, Nir Soffer  wrote:
> On Sun, Oct 1, 2017 at 9:58 AM Yaniv Kaul  wrote:
>>
>> On Sat, Sep 30, 2017 at 8:41 PM, Charles Kozler 
>> wrote:
>>>
>>> Hello,
>>>
>>> I recently read on this list from a redhat member that export domain is
>>> either being deprecated or looking at being deprecated
>
>
> We want to deprecate the export domain, but it is not deprecated yet.
>
>>>
>>>
>>> To that end, can you share details? Can you share any notes/postings/bz's
>>> that document this? I would imagine something like this would be discussed
>>> in larger audience
>
>
> I agree.
>
>>>
>>>
>>> This seems like a somewhat significant change to make and I am curious
>>> where this is scheduled? Currently, a lot of my backups rely explicitly on
>>> an export domain for online snapshots, so I'd like to plan accordingly
>
>
> Can you describe how you backup your vms using export domain?
> What do you mean by online snapshots?
>
>>
>>
>> We believe that the ability to detach and attach a data domain provides
>> equivalent and even superior functionality to the export domain. Is there
>> anything you'd miss? I don't believe it would be a significant change.
>
>
> Attaching and detaching data domain was not designed for backing up vms.
> How would you use it for backup?
>
> How do you ensure that a backup clone of a vm is not started by mistake,
> changing the backup contents?

That is a good question.
We recently introduced a new feature called "backup storage domain",
with which you can mark a storage domain as a backup storage domain.
That can guarantee that no VMs will run with disks/leases residing on
the storage domain.
The feature should already exist in oVirt 4.2 (despite a bug that
should be handled with this patch https://gerrit.ovirt.org/#/c/81290/)
You can find more information on this here:
  
https://github.com/shubham0d/ovirt-site/blob/41dcb0f1791d90d1ae0ac43cd34a399cfedf54d8/source/develop/release-management/features/storage/backup-storage-domain.html.md

Basically, the OVF that is saved in the export domain should be
similar to the one that is saved in the OVF_STORE disk in the data
storage domain.
If the user manages replication on that storage domain, it can be
reused for backup purposes by importing it into another setup.
Using a data storage domain this way is actually much more efficient
than copying to/from the export storage domain, as sketched below.
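As a rough, unofficial sketch of that flow with the Python
ovirt-engine-sdk4 (the data center / domain names and credentials are
placeholders, and the polling for the MAINTENANCE state is only hinted at):

# detach_data_domain.py -- sketch only, assumes ovirt-engine-sdk4.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
try:
    system = connection.system_service()
    dc = system.data_centers_service().list(search='name=mydc')[0]
    sd = system.storage_domains_service().list(search='name=backup_data_sd')[0]

    attached_sds = system.data_centers_service() \
                         .data_center_service(dc.id) \
                         .storage_domains_service()
    attached_sd = attached_sds.storage_domain_service(sd.id)

    # Move the domain to maintenance; per the discussion above, this is
    # where the OVF_STORE update happens today.
    attached_sd.deactivate()
    # ... poll the attached domain status until it reports MAINTENANCE ...

    # Detach it from the data center; the replicated copy can then be
    # imported into another setup.
    attached_sd.remove()
finally:
    connection.close()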

>
> Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] USER_CREATE_SNAPSHOT_FINISHED_FAILURE with Cinder storage stuck

2017-08-23 Thread Maor Lipchuk
On Tue, Aug 22, 2017 at 1:08 PM, Matthias Leopold
<matthias.leop...@meduniwien.ac.at> wrote:
>
>
> On 2017-08-22 at 09:33, Maor Lipchuk wrote:
>>
>> On Mon, Aug 21, 2017 at 6:12 PM, Matthias Leopold
>> <matthias.leop...@meduniwien.ac.at> wrote:
>>>
>>> Hi,
>>>
>>> we're experimenting with Cinder/Ceph Storage on oVirt 4.1.3. When we
>>> tried
>>> to snapshot a VM (2 disks on Cinder storage domain) the task never
>>> finished
>>> and now seems to be in an uninterruptible loop. We tried to stop it in
>>> various (brute force) ways, but the below messages (one of the disks as
>>> an
>>> example) are cluttering engine.log every 10 seconds. We tried the
>>> following:
>>>
>>> - deleting the VM
>>> - restarting ovirt-engine service
>>> - vdsClient -s 0 getAllTasksStatuses on SPM host (no result)
>>> - restarting vdsmd service on SPM host
>>> - /usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -u engine -d
>>> engine
>>> -c c841c979-70ea-4e06-b9c4-9c5ce014d76d
>>>
>>> None of this helped. How do we get rid of this failed transaction?
>>>
>>> thx
>>> matthias
>>>
>>> 2017-08-21 16:40:44,798+02 INFO
>>> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
>>> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d]
>>> transaction
>>> rolled back
>>> 2017-08-21 16:40:44,799+02 ERROR
>>> [org.ovirt.engine.core.bll.job.ExecutionHandler]
>>> (DefaultQuartzScheduler7)
>>> [080af640-bac3-4990-8bf4-6829551b538d] Exception:
>>> org.springframework.dao.DataIntegrityViolationException:
>>> CallableStatementCallback; SQL [{call insertstep(?, ?, ?, ?, ?, ?, ?, ?,
>>> ?,
>>> ?, ?, ?, ?, ?)}]; ERROR: insert or update on table "step" violates
>>> foreign
>>> key constraint "fk_step_job"
>>> 2017-08-21 16:40:44,805+02 ERROR
>>> [org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand]
>>> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Ending
>>> command
>>> 'org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand'
>>> with
>>> failure.
>>> 2017-08-21 16:40:44,807+02 WARN
>>> [org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand]
>>> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] No
>>> snapshot
>>> was created for VM 'c0235316-81c4-48be-9521-b86b338c7d20' which is in
>>> LOCKED
>>> status
>>> 2017-08-21 16:40:44,810+02 INFO
>>> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
>>> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d]
>>> transaction
>>> rolled back
>>> 2017-08-21 16:40:44,810+02 WARN
>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>>> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Trying
>>> to
>>> release exclusive lock which does not exist, lock key:
>>> 'c0235316-81c4-48be-9521-b86b338c7d20VM'
>>> 2017-08-21 16:40:44,810+02 INFO
>>> [org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand]
>>> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Lock
>>> freed
>>> to object
>>> 'EngineLock:{exclusiveLocks='[c0235316-81c4-48be-9521-b86b338c7d20=VM]',
>>> sharedLocks=''}'
>>> 2017-08-21 16:40:44,829+02 ERROR
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d]
>>> EVENT_ID:
>>> USER_CREATE_SNAPSHOT_FINISHED_FAILURE(69), Correlation ID:
>>> 080af640-bac3-4990-8bf4-6829551b538d, Job ID:
>>> a3be8af1-8d33-4d35-9672-215ac7c9959f, Call Stack: null, Custom Event ID:
>>> -1,
>>> Message: Failed to complete snapshot 'test' creation for VM ''.
>>> 2017-08-21 16:40:44,829+02 ERROR
>>> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
>>> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Failed
>>> invoking callback end method 'onFailed' for command
>>> 'c841c979-70ea-4e06-b9c4-9c5ce014d76d' with exception 'null', the
>>> callback
>>> is marked for end method retries
>>>
>>>
>
>>
>>
>> Hi Matthias,
>>
>> Can you please attach the full engine log containing the first error
>> that occurred, so we can trace its origin and fix it?

Re: [ovirt-users] USER_CREATE_SNAPSHOT_FINISHED_FAILURE with Cinder storage stuck

2017-08-22 Thread Maor Lipchuk
On Mon, Aug 21, 2017 at 6:12 PM, Matthias Leopold
 wrote:
> Hi,
>
> we're experimenting with Cinder/Ceph Storage on oVirt 4.1.3. When we tried
> to snapshot a VM (2 disks on Cinder storage domain) the task never finished
> and now seems to be in an uninterruptible loop. We tried to stop it in
> various (brute force) ways, but the below messages (one of the disks as an
> example) are cluttering engine.log every 10 seconds. We tried the following:
>
> - deleting the VM
> - restarting ovirt-engine service
> - vdsClient -s 0 getAllTasksStatuses on SPM host (no result)
> - restarting vdsmd service on SPM host
> - /usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -u engine -d engine
> -c c841c979-70ea-4e06-b9c4-9c5ce014d76d
>
> None of this helped. How do we get rid of this failed transaction?
>
> thx
> matthias
>
> 2017-08-21 16:40:44,798+02 INFO
> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] transaction
> rolled back
> 2017-08-21 16:40:44,799+02 ERROR
> [org.ovirt.engine.core.bll.job.ExecutionHandler] (DefaultQuartzScheduler7)
> [080af640-bac3-4990-8bf4-6829551b538d] Exception:
> org.springframework.dao.DataIntegrityViolationException:
> CallableStatementCallback; SQL [{call insertstep(?, ?, ?, ?, ?, ?, ?, ?, ?,
> ?, ?, ?, ?, ?)}]; ERROR: insert or update on table "step" violates foreign
> key constraint "fk_step_job"
> 2017-08-21 16:40:44,805+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand]
> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Ending
> command
> 'org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand' with
> failure.
> 2017-08-21 16:40:44,807+02 WARN
> [org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand]
> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] No snapshot
> was created for VM 'c0235316-81c4-48be-9521-b86b338c7d20' which is in LOCKED
> status
> 2017-08-21 16:40:44,810+02 INFO
> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] transaction
> rolled back
> 2017-08-21 16:40:44,810+02 WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Trying to
> release exclusive lock which does not exist, lock key:
> 'c0235316-81c4-48be-9521-b86b338c7d20VM'
> 2017-08-21 16:40:44,810+02 INFO
> [org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand]
> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Lock freed
> to object
> 'EngineLock:{exclusiveLocks='[c0235316-81c4-48be-9521-b86b338c7d20=VM]',
> sharedLocks=''}'
> 2017-08-21 16:40:44,829+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] EVENT_ID:
> USER_CREATE_SNAPSHOT_FINISHED_FAILURE(69), Correlation ID:
> 080af640-bac3-4990-8bf4-6829551b538d, Job ID:
> a3be8af1-8d33-4d35-9672-215ac7c9959f, Call Stack: null, Custom Event ID: -1,
> Message: Failed to complete snapshot 'test' creation for VM ''.
> 2017-08-21 16:40:44,829+02 ERROR
> [org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
> (DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Failed
> invoking callback end method 'onFailed' for command
> 'c841c979-70ea-4e06-b9c4-9c5ce014d76d' with exception 'null', the callback
> is marked for end method retries
>
>
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


Hi Matthias,

Can you please attach the full engine log containing the first error
that occurred, so we can trace its origin and fix it?
Does it reproduce consistently?

The engine does not use VDSM tasks to manage Cinder; it uses Cinder
as an external provider through the COCO infrastructure for async
tasks.
The COCO tasks are managed in the database using the command_entities
table. Basically, if you remove all references to the command id from
command_entities and restart the engine, you should not see it any
more.

Regards,
Maor
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] sanlock ids file broken after server crash

2017-07-30 Thread Maor Lipchuk
On Sun, Jul 30, 2017 at 4:24 PM, Maor Lipchuk <mlipc...@redhat.com> wrote:
> Hi David,
Sorry, I meant Johan

>
> I'm not sure how it got to that character in the first place.
> Nir, Is there a safe way to fix that while there are running VMs?
>
> Regards,
> Maor
>
> On Sun, Jul 30, 2017 at 11:58 AM, Johan Bernhardsson <jo...@kafit.se> wrote:
>> (First reply did not get to the list)
>>
>> From sanlock.log:
>>
>> 2017-07-30 10:49:31+0200 1766275 [1171]: s310751 lockspace 0924ff77-
>> ef51-435b-b90d-50bfbf2e8de7:1:/rhev/data-
>> center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-ef51-435b-b90d-
>> 50bfbf2e8de7/dom_md/ids:0
>> 2017-07-30 10:49:31+0200 1766275 [10496]: verify_leader 1 wrong space
>> name 0924ff77-ef51-435b-b90d-50bfbf2eke7 0924ff77-ef51-435b-b90d-
>> 50bfbf2e8de7 /rhev/data-center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-
>> ef51-435b-b90d-50bfbf2e8de7/dom_md/ids
>> 2017-07-30 10:49:31+0200 1766275 [10496]: leader1 delta_acquire_begin
>> error -226 lockspace 0924ff77-ef51-435b-b90d-50bfbf2e8de7 host_id 1
>> 2017-07-30 10:49:31+0200 1766275 [10496]: leader2 path /rhev/data-
>> center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-ef51-435b-b90d-
>> 50bfbf2e8de7/dom_md/ids offset 0
>> 2017-07-30 10:49:31+0200 1766275 [10496]: leader3 m 12212010 v 30003 ss
>> 512 nh 0 mh 4076 oi 1 og 2031079063 lv 0
>> 2017-07-30 10:49:31+0200 1766275 [10496]: leader4 sn 0924ff77-ef51-
>> 435b-b90d-50bfbf2eke7 rn <93>7^\afa5-3a91-415b-a04c-
>> 221d3e060163.vbgkvm01.a ts 4351980 cs eefa4dd7
>> 2017-07-30 10:49:32+0200 1766276 [1171]: s310751 add_lockspace fail
>> result -226
>>
>>
>> vdsm logs doesnt have any errors and engine.log does not have any
>> errors.
>>
>> And if i check the ids file manually. I can see that everything in it
>> is correct except for the first host in the cluster where the space
>> name and host id is broken.
>>
>>
>> /Johan
>>
>> On Sun, 2017-07-30 at 11:18 +0300, Maor Lipchuk wrote:
>>> Hi Johan,
>>>
>>> Can you please share the vdsm and engine logs.
>>>
>>> Also, it won't harm to also get the sanlock logs just in case sanlock
>>> was configured to save all debugging in a log file (see
>>> http://people.redhat.com/teigland/sanlock-messages.txt)).
>>> Try to share the sanlock ouput by running  'sanlock client status',
>>> 'sanlock client log_dump'.
>>>
>>> Regards,
>>> Maor
>>>
>>> On Thu, Jul 27, 2017 at 6:18 PM, Johan Bernhardsson <jo...@kafit.se>
>>> wrote:
>>> >
>>> > Hello,
>>> >
>>> > The ids file for sanlock is broken on one setup. The first host id
>>> > in
>>> > the file is wrong.
>>> >
>>> > From the logfile i have:
>>> >
>>> > verify_leader 1 wrong space name 0924ff77-ef51-435b-b90d-
>>> > 50bfbf2e�ke7
>>> > 0924ff77-ef51-435b-b90d-50bfbf2e8de7 /rhev/data-
>>> > center/mnt/glusterSD/
>>> >
>>> >
>>> >
>>> > Note the broken char in the space name.
>>> >
>>> > This also apears. And it seams as the hostid too is broken in the
>>> > ids
>>> > file:
>>> >
>>> > leader4 sn 0924ff77-ef51-435b-b90d-50bfbf2e�ke7 rn ��7 afa5-3a91-
>>> > 415b-
>>> > a04c-221d3e060163.vbgkvm01.a ts 4351980 cs eefa4dd7
>>> >
>>> > Note the broken chars there as well.
>>> >
>>> > If i check the ids file with less or strings the first row where my
>>> > vbgkvm01 host are. That has broken chars.
>>> >
>>> > Can this be repaired in some way without taking down all the
>>> > virtual
>>> > machines on that storage?
>>> >
>>> >
>>> > /Johan
>>> > ___
>>> > Users mailing list
>>> > Users@ovirt.org
>>> > http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] sanlock ids file broken after server crash

2017-07-30 Thread Maor Lipchuk
Hi David,

I'm not sure how it got to that character in the first place.
Nir, is there a safe way to fix that while there are running VMs?

Regards,
Maor

On Sun, Jul 30, 2017 at 11:58 AM, Johan Bernhardsson <jo...@kafit.se> wrote:
> (First reply did not get to the list)
>
> From sanlock.log:
>
> 2017-07-30 10:49:31+0200 1766275 [1171]: s310751 lockspace 0924ff77-
> ef51-435b-b90d-50bfbf2e8de7:1:/rhev/data-
> center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-ef51-435b-b90d-
> 50bfbf2e8de7/dom_md/ids:0
> 2017-07-30 10:49:31+0200 1766275 [10496]: verify_leader 1 wrong space
> name 0924ff77-ef51-435b-b90d-50bfbf2eke7 0924ff77-ef51-435b-b90d-
> 50bfbf2e8de7 /rhev/data-center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-
> ef51-435b-b90d-50bfbf2e8de7/dom_md/ids
> 2017-07-30 10:49:31+0200 1766275 [10496]: leader1 delta_acquire_begin
> error -226 lockspace 0924ff77-ef51-435b-b90d-50bfbf2e8de7 host_id 1
> 2017-07-30 10:49:31+0200 1766275 [10496]: leader2 path /rhev/data-
> center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-ef51-435b-b90d-
> 50bfbf2e8de7/dom_md/ids offset 0
> 2017-07-30 10:49:31+0200 1766275 [10496]: leader3 m 12212010 v 30003 ss
> 512 nh 0 mh 4076 oi 1 og 2031079063 lv 0
> 2017-07-30 10:49:31+0200 1766275 [10496]: leader4 sn 0924ff77-ef51-
> 435b-b90d-50bfbf2eke7 rn <93>7^\afa5-3a91-415b-a04c-
> 221d3e060163.vbgkvm01.a ts 4351980 cs eefa4dd7
> 2017-07-30 10:49:32+0200 1766276 [1171]: s310751 add_lockspace fail
> result -226
>
>
> vdsm logs doesnt have any errors and engine.log does not have any
> errors.
>
> And if i check the ids file manually. I can see that everything in it
> is correct except for the first host in the cluster where the space
> name and host id is broken.
>
>
> /Johan
>
> On Sun, 2017-07-30 at 11:18 +0300, Maor Lipchuk wrote:
>> Hi Johan,
>>
>> Can you please share the vdsm and engine logs.
>>
>> Also, it won't harm to also get the sanlock logs just in case sanlock
>> was configured to save all debugging in a log file (see
>> http://people.redhat.com/teigland/sanlock-messages.txt)).
>> Try to share the sanlock ouput by running  'sanlock client status',
>> 'sanlock client log_dump'.
>>
>> Regards,
>> Maor
>>
>> On Thu, Jul 27, 2017 at 6:18 PM, Johan Bernhardsson <jo...@kafit.se>
>> wrote:
>> >
>> > Hello,
>> >
>> > The ids file for sanlock is broken on one setup. The first host id
>> > in
>> > the file is wrong.
>> >
>> > From the logfile i have:
>> >
>> > verify_leader 1 wrong space name 0924ff77-ef51-435b-b90d-
>> > 50bfbf2e�ke7
>> > 0924ff77-ef51-435b-b90d-50bfbf2e8de7 /rhev/data-
>> > center/mnt/glusterSD/
>> >
>> >
>> >
>> > Note the broken char in the space name.
>> >
>> > This also apears. And it seams as the hostid too is broken in the
>> > ids
>> > file:
>> >
>> > leader4 sn 0924ff77-ef51-435b-b90d-50bfbf2e�ke7 rn ��7 afa5-3a91-
>> > 415b-
>> > a04c-221d3e060163.vbgkvm01.a ts 4351980 cs eefa4dd7
>> >
>> > Note the broken chars there as well.
>> >
>> > If i check the ids file with less or strings the first row where my
>> > vbgkvm01 host are. That has broken chars.
>> >
>> > Can this be repaired in some way without taking down all the
>> > virtual
>> > machines on that storage?
>> >
>> >
>> > /Johan
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Deploying training lab

2017-07-30 Thread Maor Lipchuk
If you are referring to a plain OS to run the laptops from, I'm not
aware of any specific one; maybe DSL (http://www.damnsmalllinux.org/)?

On Sun, Jul 30, 2017 at 1:01 PM, Maor Lipchuk <mlipc...@redhat.com> wrote:
> What about ovirt-live?
>http://www.ovirt.org/download/ovirt-live/
>
> On Sun, Jul 30, 2017 at 12:40 PM, Andy Michielsen
> <andy.michiel...@gmail.com> wrote:
>> Hi Maor,
>>
>> Thanks for pitching in. I have to admit it isn't really an oVirt issue but 
>> more on how to allow users working with it.
>>
>> I do use the pool methode to provide the virtual servers and dektops.
>> For developers and support services I just let them use rdp for windows or 
>> even virt-viewer but for giving demo's or education I would like to provide 
>> something like thinclients or boot from usb linux solution who just start 
>> with firefox or virt-viewer.
>>
>> All idea's and suggestions are welcome.
>>
>> Thanks.
>>
>>> On 30 Jul 2017, at 10:32, Maor Lipchuk <mlipc...@redhat.com> wrote:
>>>
>>> Hi Andy,
>>>
>>> Can you please elaborate more regarding the requirements and what you
>>> are trying to achieve.
>>> Have you tried to use vm-pool? Those VMs should be configured as
>>> stateless and acquire minimal storage capacity.
>>>
>>> Regards,
>>> Maor
>>>
>>> On Fri, Jul 28, 2017 at 10:19 PM, Andy Michielsen
>>> <andy.michiel...@gmail.com> wrote:
>>>> Hello all,
>>>>
>>>> Don't know if this is the right place to ask this but I would like to set 
>>>> up
>>>> a trainingslab with oVirt.
>>>>
>>>> I have deployed an engine and a host with local storage and want to run 1
>>>> server and 5 desktops off it.
>>>>
>>>> But the desktops will be used on thin clients or old laptops with some
>>>> minimal os installation running spice client or a webbrowser.
>>>>
>>>> I was wondering if anyone can give me pointer in how to set up a minimal
>>>> laptop which only need to run an spice client.
>>>>
>>>> Kind regards.
>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Deploying training lab

2017-07-30 Thread Maor Lipchuk
What about ovirt-live?
   http://www.ovirt.org/download/ovirt-live/

On Sun, Jul 30, 2017 at 12:40 PM, Andy Michielsen
<andy.michiel...@gmail.com> wrote:
> Hi Maor,
>
> Thanks for pitching in. I have to admit it isn't really an oVirt issue but 
> more on how to allow users working with it.
>
> I do use the pool methode to provide the virtual servers and dektops.
> For developers and support services I just let them use rdp for windows or 
> even virt-viewer but for giving demo's or education I would like to provide 
> something like thinclients or boot from usb linux solution who just start 
> with firefox or virt-viewer.
>
> All idea's and suggestions are welcome.
>
> Thanks.
>
>> On 30 Jul 2017, at 10:32, Maor Lipchuk <mlipc...@redhat.com> wrote:
>>
>> Hi Andy,
>>
>> Can you please elaborate more regarding the requirements and what you
>> are trying to achieve.
>> Have you tried to use vm-pool? Those VMs should be configured as
>> stateless and acquire minimal storage capacity.
>>
>> Regards,
>> Maor
>>
>> On Fri, Jul 28, 2017 at 10:19 PM, Andy Michielsen
>> <andy.michiel...@gmail.com> wrote:
>>> Hello all,
>>>
>>> Don't know if this is the right place to ask this but I would like to set up
>>> a trainingslab with oVirt.
>>>
>>> I have deployed an engine and a host with local storage and want to run 1
>>> server and 5 desktops off it.
>>>
>>> But the desktops will be used on thin clients or old laptops with some
>>> minimal os installation running spice client or a webbrowser.
>>>
>>> I was wondering if anyone can give me pointer in how to set up a minimal
>>> laptop which only need to run an spice client.
>>>
>>> Kind regards.
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Deploying training lab

2017-07-30 Thread Maor Lipchuk
Hi Andy,

Can you please elaborate a bit more on the requirements and what you
are trying to achieve?
Have you tried using a VM pool? Those VMs should be configured as
stateless and acquire minimal storage capacity.

Regards,
Maor

On Fri, Jul 28, 2017 at 10:19 PM, Andy Michielsen
 wrote:
> Hello all,
>
> Don't know if this is the right place to ask this but I would like to set up
> a trainingslab with oVirt.
>
> I have deployed an engine and a host with local storage and want to run 1
> server and 5 desktops off it.
>
> But the desktops will be used on thin clients or old laptops with some
> minimal os installation running spice client or a webbrowser.
>
> I was wondering if anyone can give me pointer in how to set up a minimal
> laptop which only need to run an spice client.
>
> Kind regards.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] sanlock ids file broken after server crash

2017-07-30 Thread Maor Lipchuk
Hi Johan,

Can you please share the VDSM and engine logs?

Also, it wouldn't hurt to get the sanlock logs as well, in case sanlock
was configured to save all debugging output in a log file (see
http://people.redhat.com/teigland/sanlock-messages.txt).
Try to share the sanlock output by running 'sanlock client status' and
'sanlock client log_dump'.

Regards,
Maor

On Thu, Jul 27, 2017 at 6:18 PM, Johan Bernhardsson  wrote:
> Hello,
>
> The ids file for sanlock is broken on one setup. The first host id in
> the file is wrong.
>
> From the logfile i have:
>
> verify_leader 1 wrong space name 0924ff77-ef51-435b-b90d-50bfbf2e�ke7
> 0924ff77-ef51-435b-b90d-50bfbf2e8de7 /rhev/data-center/mnt/glusterSD/
>
>
>
> Note the broken char in the space name.
>
> This also apears. And it seams as the hostid too is broken in the ids
> file:
>
> leader4 sn 0924ff77-ef51-435b-b90d-50bfbf2e�ke7 rn ��7 afa5-3a91-415b-
> a04c-221d3e060163.vbgkvm01.a ts 4351980 cs eefa4dd7
>
> Note the broken chars there as well.
>
> If i check the ids file with less or strings the first row where my
> vbgkvm01 host are. That has broken chars.
>
> Can this be repaired in some way without taking down all the virtual
> machines on that storage?
>
>
> /Johan
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt VM backups

2017-07-27 Thread Maor Lipchuk
On Thu, Jul 27, 2017 at 1:14 PM, Abi Askushi  wrote:
> Hi All,
>
> For VM backups I am using some python script to automate the snapshot ->
> clone -> export -> delete steps (although with some issues when trying to
> backups a Windows 10 VM)
>
> I was wondering if there is there any plan to integrate VM backups in the
> GUI or what other recommended ways exist out there.

Hi Abi,

Don't you want to use the backup API feature for that:
https://www.ovirt.org/develop/api/design/backup-api/
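For reference, a rough sketch of the snapshot-and-attach flow that the
backup API page describes, using the Python ovirt-engine-sdk4. The VM
names and the backup appliance VM are placeholders, attaching a disk from
a specific snapshot via DiskAttachment(snapshot=...) is assumed to be
supported by your version, and polling, the actual data copy and cleanup
are omitted:

# vm_backup_sketch.py -- sketch only, assumes ovirt-engine-sdk4.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
try:
    vms_service = connection.system_service().vms_service()
    guest = vms_service.list(search='name=myvm')[0]
    backup_vm = vms_service.list(search='name=backup-appliance')[0]

    # 1. Snapshot the guest (without memory state).
    snaps_service = vms_service.vm_service(guest.id).snapshots_service()
    snap = snaps_service.add(
        types.Snapshot(description='backup', persist_memstate=False),
    )
    # ... wait until the snapshot status is OK ...

    # 2. Attach the snapshot's disks to the backup appliance VM, where
    #    they can be read and copied out.
    snap_disks = snaps_service.snapshot_service(snap.id).disks_service().list()
    attachments = vms_service.vm_service(backup_vm.id).disk_attachments_service()
    for disk in snap_disks:
        attachments.add(
            types.DiskAttachment(
                disk=types.Disk(id=disk.id),
                snapshot=types.Snapshot(id=snap.id),
                interface=types.DiskInterface.VIRTIO,
                bootable=False,
                active=True,
            ),
        )
    # 3. ... copy the data from inside the backup appliance, then detach
    #    the disks and remove the snapshot ...
finally:
    connection.close()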

Regards,
Maor

>
> Thanx,
> Abi
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Maor Lipchuk
On Tue, Jul 25, 2017 at 6:25 PM, Vinícius Ferrão <fer...@if.ufrj.br> wrote:
> Bug opened here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1474904


Thanks! Let's continue the discussion in the bug

>
> Thanks,
> V.
>
> On 25 Jul 2017, at 12:08, Vinícius Ferrão <fer...@if.ufrj.br> wrote:
>
> Hello Maor,
>
> Thanks for answering and looking deeper in this case. You’re welcome to
> connect to my machine since it’s reachable over the internet. I’ll be
> opening a ticket in moments. Just to feed an update here:
>
> I’ve done what you asked, but since I’m running Self Hosted Engine, I lost
> the connection to HE, here’s the CLI:
>
>
>
> Last login: Thu Jul 20 02:43:50 2017 from 172.31.2.3
>
>  node status: OK
>  See `nodectl check` for more information
>
> Admin Console: https://192.168.11.3:9090/ or https://192.168.12.3:9090/ or
> https://146.164.37.103:9090/
>
> [root@ovirt3 ~]# iscsiadm -m session -u
> Logging out of session [sid: 1, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he, portal: 192.168.12.14,3260]
> Logging out of session [sid: 4, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal:
> 192.168.12.14,3260]
> Logging out of session [sid: 7, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal:
> 192.168.12.14,3260]
> Logging out of session [sid: 5, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal:
> 192.168.11.14,3260]
> Logging out of session [sid: 6, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal:
> 192.168.11.14,3260]
> Logout of [sid: 1, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he,
> portal: 192.168.12.14,3260] successful.
> Logout of [sid: 4, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio,
> portal: 192.168.12.14,3260] successful.
> Logout of [sid: 7, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio,
> portal: 192.168.12.14,3260] successful.
> Logout of [sid: 5, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio,
> portal: 192.168.11.14,3260] successful.
> Logout of [sid: 6, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio,
> portal: 192.168.11.14,3260] successful.
> [root@ovirt3 ~]# service iscsid stop
> Redirecting to /bin/systemctl stop  iscsid.service
> Warning: Stopping iscsid.service, but it can still be activated by:
>  iscsid.socket
>
> [root@ovirt3 ~]# mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>
> [root@ovirt3 ~]# service iscsid start
> Redirecting to /bin/systemctl start  iscsid.service
>
> And finally:
>
> [root@ovirt3 ~]# hosted-engine --vm-status
> .
> .
> .
>
> It just hangs.
>
> Thanks,
> V.
>
> On 25 Jul 2017, at 05:54, Maor Lipchuk <mlipc...@redhat.com> wrote:
>
> Hi Vinícius,
>
> I was trying to reproduce your scenario and also encountered this
> issue, so please disregard my last comment, can you please open a bug
> on that so we can investigate it properly
>
> Thanks,
> Maor
>
>
> On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk <mlipc...@redhat.com> wrote:
>
> Hi Vinícius,
>
> For some reason it looks like your networks are both connected to the same
> IPs.
>
> based on the  VDSM logs:
> u'connectionParams':[
>{
>   u'netIfaceName':u'eno3.11',
>   u'connection':u'192.168.11.14',
>},
>{
>   u'netIfaceName':u'eno3.11',
>   u'connection':u'192.168.12.14',
>}
>   u'netIfaceName':u'eno4.12',
>   u'connection':u'192.168.11.14',
>},
>{
>   u'netIfaceName':u'eno4.12',
>   u'connection':u'192.168.12.14',
>}
> ],
>
> Can you try to reconnect to the iSCSI storage domain after
> re-initializing your iscsiadm on your host.
>
> 1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it
>
> 2. In your VDSM host, log out from your iscsi open sessions which are
> related to this storage domain
> if that is your only iSCSI storage domain log out from all the sessions:
>  "iscsiadm -m session -u"
>
> 3. Stop the iscsid service:
>  "service iscsid stop"
>
> 4. Move your network interfaces configured in the iscsiadm to a
> temporary folder:
>   mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>
> 5. Start the iscsid service
>  "service iscsid start"
>
> Regards,
> Maor and Benny
>
> On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz <u...@laverenz.de> wrote:
>
> Hi,
>
>
> On 19.07.2017 at 04:52, Vinícius Ferrão wrote:
>
> I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
> trying to enable the feature without success too.
>
> Here’s wh

Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Maor Lipchuk
Can you please try to connect to your iSCSI server using iscsiadm from
your VDSM host, for example like so:
   iscsiadm -m node -T
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio -I eno3.11 -p
192.168.12.14,3260 --login

On Tue, Jul 25, 2017 at 11:54 AM, Maor Lipchuk <mlipc...@redhat.com> wrote:
> Hi Vinícius,
>
> I was trying to reproduce your scenario and also encountered this
> issue, so please disregard my last comment, can you please open a bug
> on that so we can investigate it properly
>
> Thanks,
> Maor
>
>
> On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk <mlipc...@redhat.com> wrote:
>> Hi Vinícius,
>>
>> For some reason it looks like your networks are both connected to the same 
>> IPs.
>>
>> based on the  VDSM logs:
>>   u'connectionParams':[
>>  {
>> u'netIfaceName':u'eno3.11',
>> u'connection':u'192.168.11.14',
>>  },
>>  {
>> u'netIfaceName':u'eno3.11',
>> u'connection':u'192.168.12.14',
>>  }
>> u'netIfaceName':u'eno4.12',
>> u'connection':u'192.168.11.14',
>>  },
>>  {
>> u'netIfaceName':u'eno4.12',
>> u'connection':u'192.168.12.14',
>>  }
>>   ],
>>
>> Can you try to reconnect to the iSCSI storage domain after
>> re-initializing your iscsiadm on your host.
>>
>> 1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it
>>
>> 2. In your VDSM host, log out from your iscsi open sessions which are
>> related to this storage domain
>> if that is your only iSCSI storage domain log out from all the sessions:
>>"iscsiadm -m session -u"
>>
>> 3. Stop the iscsid service:
>>"service iscsid stop"
>>
>> 4. Move your network interfaces configured in the iscsiadm to a
>> temporary folder:
>> mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>>
>> 5. Start the iscsid service
>>"service iscsid start"
>>
>> Regards,
>> Maor and Benny
>>
>> On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz <u...@laverenz.de> wrote:
>>> Hi,
>>>
>>>
>>> On 19.07.2017 at 04:52, Vinícius Ferrão wrote:
>>>
>>>> I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
>>>> trying to enable the feature without success too.
>>>>
>>>> Here’s what I’ve done, step-by-step.
>>>>
>>>> 1. Installed oVirt Node 4.1.3 with the following network settings:
>>>>
>>>> eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
>>>> eno3 with 9216 MTU.
>>>> eno4 with 9216 MTU.
>>>> vlan11 on eno3 with 9216 MTU and fixed IP addresses.
>>>> vlan12 on eno4 with 9216 MTU and fixed IP addresses.
>>>>
>>>> eno3 and eno4 are my iSCSI MPIO Interfaces, completelly segregated, on
>>>> different switches.
>>>
>>>
>>> This is the point: the OVirt implementation of iSCSI-Bonding assumes that
>>> all network interfaces in the bond can connect/reach all targets, including
>>> those in the other net(s). The fact that you use separate, isolated networks
>>> means that this is not the case in your setup (and not in mine).
>>>
>>> I am not sure if this is a bug, a design flaw or a feature, but as a result
>>> of this OVirt's iSCSI-Bonding does not work for us.
>>>
>>> Please see my mail from yesterday for a workaround.
>>>
>>> cu,
>>> Uwe
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Maor Lipchuk
Hi Vinícius,

I was trying to reproduce your scenario and also encountered this
issue, so please disregard my last comment, can you please open a bug
on that so we can investigate it properly

Thanks,
Maor


On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk <mlipc...@redhat.com> wrote:
> Hi Vinícius,
>
> For some reason it looks like your networks are both connected to the same 
> IPs.
>
> based on the  VDSM logs:
>   u'connectionParams':[
>  {
> u'netIfaceName':u'eno3.11',
> u'connection':u'192.168.11.14',
>  },
>  {
> u'netIfaceName':u'eno3.11',
> u'connection':u'192.168.12.14',
>  }
> u'netIfaceName':u'eno4.12',
> u'connection':u'192.168.11.14',
>  },
>  {
> u'netIfaceName':u'eno4.12',
> u'connection':u'192.168.12.14',
>  }
>   ],
>
> Can you try to reconnect to the iSCSI storage domain after
> re-initializing your iscsiadm on your host.
>
> 1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it
>
> 2. In your VDSM host, log out from your iscsi open sessions which are
> related to this storage domain
> if that is your only iSCSI storage domain log out from all the sessions:
>"iscsiadm -m session -u"
>
> 3. Stop the iscsid service:
>"service iscsid stop"
>
> 4. Move your network interfaces configured in the iscsiadm to a
> temporary folder:
> mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>
> 5. Start the iscsid service
>"service iscsid start"
>
> Regards,
> Maor and Benny
>
> On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz <u...@laverenz.de> wrote:
>> Hi,
>>
>>
>> On 19.07.2017 at 04:52, Vinícius Ferrão wrote:
>>
>>> I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
>>> trying to enable the feature without success too.
>>>
>>> Here’s what I’ve done, step-by-step.
>>>
>>> 1. Installed oVirt Node 4.1.3 with the following network settings:
>>>
>>> eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
>>> eno3 with 9216 MTU.
>>> eno4 with 9216 MTU.
>>> vlan11 on eno3 with 9216 MTU and fixed IP addresses.
>>> vlan12 on eno4 with 9216 MTU and fixed IP addresses.
>>>
>>> eno3 and eno4 are my iSCSI MPIO Interfaces, completelly segregated, on
>>> different switches.
>>
>>
>> This is the point: the OVirt implementation of iSCSI-Bonding assumes that
>> all network interfaces in the bond can connect/reach all targets, including
>> those in the other net(s). The fact that you use separate, isolated networks
>> means that this is not the case in your setup (and not in mine).
>>
>> I am not sure if this is a bug, a design flaw or a feature, but as a result
>> of this OVirt's iSCSI-Bonding does not work for us.
>>
>> Please see my mail from yesterday for a workaround.
>>
>> cu,
>> Uwe
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Maor Lipchuk
Hi Vinícius,

For some reason it looks like your networks are both connected to the same IPs.

based on the  VDSM logs:
  u'connectionParams':[
 {
u'netIfaceName':u'eno3.11',
u'connection':u'192.168.11.14',
 },
 {
u'netIfaceName':u'eno3.11',
u'connection':u'192.168.12.14',
 }
u'netIfaceName':u'eno4.12',
u'connection':u'192.168.11.14',
 },
 {
u'netIfaceName':u'eno4.12',
u'connection':u'192.168.12.14',
 }
  ],

Can you try to reconnect to the iSCSI storage domain after
re-initializing your iscsiadm on your host.

1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it

2. In your VDSM host, log out from your iscsi open sessions which are
related to this storage domain
if that is your only iSCSI storage domain log out from all the sessions:
   "iscsiadm -m session -u"

3. Stop the iscsid service:
   "service iscsid stop"

4. Move your network interfaces configured in the iscsiadm to a
temporary folder:
mv /var/lib/iscsi/ifaces/* /tmp/ifaces

5. Start the iscsid service
   "service iscsid start"

Regards,
Maor and Benny

On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz  wrote:
> Hi,
>
>
> On 19.07.2017 at 04:52, Vinícius Ferrão wrote:
>
>> I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
>> trying to enable the feature without success too.
>>
>> Here’s what I’ve done, step-by-step.
>>
>> 1. Installed oVirt Node 4.1.3 with the following network settings:
>>
>> eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
>> eno3 with 9216 MTU.
>> eno4 with 9216 MTU.
>> vlan11 on eno3 with 9216 MTU and fixed IP addresses.
>> vlan12 on eno4 with 9216 MTU and fixed IP addresses.
>>
>> eno3 and eno4 are my iSCSI MPIO Interfaces, completelly segregated, on
>> different switches.
>
>
> This is the point: the OVirt implementation of iSCSI-Bonding assumes that
> all network interfaces in the bond can connect/reach all targets, including
> those in the other net(s). The fact that you use separate, isolated networks
> means that this is not the case in your setup (and not in mine).
>
> I am not sure if this is a bug, a design flaw or a feature, but as a result
> of this OVirt's iSCSI-Bonding does not work for us.
>
> Please see my mail from yesterday for a workaround.
>
> cu,
> Uwe
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] SPM in case of Failure

2017-06-13 Thread Maor Lipchuk
Hi Arsène,

See my comments inline

On Mon, Jun 12, 2017 at 1:02 PM, Arsène Gschwind
 wrote:
> Hi
>
> Our setup looks like:
>
> - 2 clusters in 2 different site connected with 10GBit LAN
> - Storage based on FC SAN replicated on both site and available for both
> site (The LUNs are available over 4 pathes, 2 from each site)
>
> My observation:
>
> In case one site goes down and this site owned SPM is it not possible to
> move or force SPM on the second site.

It could be a sanlock issue.
The SPM uses sanlock on the storage domain, so once the SPM host is
rebooted and the sanlock lease is released from the storage domain
(IINM after 80 seconds), another host can obtain a lock on that
storage domain and become the new SPM.
What message do you get in the logs when you try to do that?


> On the site which is down it's possible to reset all VMs that crashed using
> the "Confirm Host rebooted" menu on the oVirt Host but this does not reset
> SPM.
> The only solution I found was to bring the Host which owned SPM up again to
> be able to move it to the other site and then reactivate the storage
> domains.

I would try to attach the storage domain (detach it first if it is
already attached) so that you can register any VMs/Templates/Disks that
were added in the original environment; see the sketch below.
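If it helps, a hedged sketch of registering such unregistered VMs with the
Python ovirt-engine-sdk4 once the domain is attached again (the domain and
cluster names are placeholders, and allow_partial_import is assumed to be
available in your engine version):

# register_unregistered_vms.py -- sketch only, assumes ovirt-engine-sdk4.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
try:
    sds_service = connection.system_service().storage_domains_service()
    sd = sds_service.list(search='name=replicated_sd')[0]
    sd_vms_service = sds_service.storage_domain_service(sd.id).vms_service()

    # List the VMs that exist on the domain but are not registered in the
    # engine, and register each one into the surviving site's cluster.
    for unregistered_vm in sd_vms_service.list(unregistered=True):
        sd_vms_service.vm_service(unregistered_vm.id).register(
            cluster=types.Cluster(name='site2-cluster'),
            allow_partial_import=True,
        )
finally:
    connection.close()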

>
> Is this a normal behavior?
> Is there any way to force SPM reelection ?
>
> Thanks for your help or idea...
>
> Regards,
> Arsène
>
> --
>
> Arsène Gschwind
> Fa. Sapify AG im Auftrag der Universität Basel
> IT Services
> Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
> Tel. +41 79 449 25 63  |  http://its.unibas.ch
> ITS-ServiceDesk: support-...@unibas.ch | +41 61 267 14 11
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-04 Thread Maor Lipchuk
On Sun, Jun 4, 2017 at 8:51 PM, Abi Askushi <rightkickt...@gmail.com> wrote:
> I clean installed everything and ran into the same.
> I then ran gdeploy and encountered the same issue when deploying engine.
> Seems that gluster (?) doesn't like 4K sector drives. I am not sure if it
> has to do with alignment. The weird thing is that gluster volumes are all
> ok, replicating normally and no split brain is reported.
>
> The solution to the mentioned bug (1386443) was to format with 512 sector
> size, which for my case is not an option:
>
> mkfs.xfs -f -i size=512 -s size=512 /dev/gluster/engine
> illegal sector size 512; hw sector is 4096
>
> Is there any workaround to address this?
>
> Thanx,
> Alex
>
>
> On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi <rightkickt...@gmail.com> wrote:
>>
>> Hi Maor,
>>
>> My disk are of 4K block size and from this bug seems that gluster replica
>> needs 512B block size.
>> Is there a way to make gluster function with 4K drives?
>>
>> Thank you!
>>
>> On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk <mlipc...@redhat.com> wrote:
>>>
>>> Hi Alex,
>>>
>>> I saw a bug that might be related to the issue you encountered at
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1386443
>>>
>>> Sahina, do you maybe have any advice? Do you think that BZ 1386443 is related?
>>>
>>> Regards,
>>> Maor
>>>
>>> On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi <rightkickt...@gmail.com>
>>> wrote:
>>> > Hi All,
>>> >
>>> > I have installed successfully several times oVirt (version 4.1) with 3
>>> > nodes
>>> > on top glusterfs.
>>> >
>>> > This time, when trying to configure the same setup, I am facing the
>>> > following issue which doesn't seem to go away. During installation i
>>> > get the
>>> > error:
>>> >
>>> > Failed to execute stage 'Misc configuration': Cannot acquire host id:
>>> > (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock
>>> > lockspace add failure', 'Invalid argument'))
>>> >
>>> > The only different in this setup is that instead of standard
>>> > partitioning i
>>> > have GPT partitioning and the disks have 4K block size instead of 512.
>>> >
>>> > The /var/log/sanlock.log has the following lines:
>>> >
>>> > 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace
>>> >
>>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/ids:0
>>> > 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource
>>> >
>>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/leases:1048576
>>> > for 2,9,23040
>>> > 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace
>>> >
>>> > a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922:250:/rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids:0
>>> > 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD
>>> > 0x7f59b8c0:0x7f59b8d0:0x7f59b0101000 result -22:0 match res
>>> > 2017-06-03 19:21:36+0200 23471 [23522]: read_sectors delta_leader
>>> > offset
>>> > 127488 rv -22
>>> >
>>> > /rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids
>>> > 2017-06-03 19:21:37+0200 23472 [930]: s9 host 250 1 23450
>>> > 88c2244c-a782-40ed-9560-6cfa4d46f853.v0.neptune
>>> > 2017-06-03 19:21:37+0200 23472 [943]: s10 add_lockspace fail result -22
>>> >
>>> > And /var/log/vdsm/vdsm.log says:
>>> >
>>> > 2017-06-03 19:19:38,176+0200 WARN  (jsonrpc/3)
>>> > [storage.StorageServer.MountConnection] Using user specified
>>> > backup-volfile-servers option (storageServer:253)
>>> > 2017-06-03 19:21:12,379+0200 WARN  (periodic/1) [throttled] MOM not
>>> > available. (throttledlog:105)
>>> > 2017-06-03 19:21:12,380+0200 WARN  (periodic/1) [throttled] MOM not
>>> > available, KSM stats will be missing. (throttledlog:105)
>>> > 2017-06-03 19:21:14,714+0200 WARN  (jsonrpc/1)
>>> > [storage.StorageServer.MountConnection] Using user specified
>>> > backup-volfile-servers option (storageServer:253)
>>

Re: [ovirt-users] VM/Template copy issue

2017-06-04 Thread Maor Lipchuk
On Sat, Jun 3, 2017 at 6:20 AM, Bryan Sockel <bryan.soc...@altn.com> wrote:
> This happening to a number  of vm's. All vm's are running and can be stopped
> and re started.  We can read and write data within the vm.
>
> All vms are currently running on a single node gluster file system.  I am
> attempting to migrate to a replica 3 gluster file system when i exprience
> these issues.  The problem always seems to happen when finalizing the move
> or copy.
>
> If it makes a difference the gluster storage we are coping to and from are
> dedicated storage servers.
>
>
> ---- Original message 
> From: Maor Lipchuk <mlipc...@redhat.com>
> Date: 6/2/17 5:29 PM (GMT-06:00)
> To: Bryan Sockel <bryan.soc...@altn.com>
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] VM/Template copy issue
>
> 
> From : Maor Lipchuk [mlipc...@redhat.com]
> To : Bryan Sockel [bryan.soc...@altn.com]
> Cc : users@ovirt.org [users@ovirt.org]
> Date : Friday, June 2 2017 17:27:32
>
> Hi Bryan,
>
> It seems like there was an error from qemu-img while reading sector
> 143654878 .
>  the Image copy (conversion) failed with low level qemu-img failure:
>
> CopyImageError: low level Image copy failed:
> ("cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/nice',
> '-n', '19', '/usr/bin/ionice', '-c', '3', '/usr/bin/qemu-img',
> 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw',
> u'/rhev/data-center/d776b537-16f2-4543-bd96-9b4cba69e247/e371d380-7194-4950-b901-5f2aed5dfb35/images/9959c6e4-1fb7-455b-ad5e-8b9e2324a0ab/3f4e183b-7957-4bee-9153-9d967491f882',
> '-O', 'raw',
> u'/rhev/data-center/mnt/glusterSD/vs-host-colo-1-gluster.altn.int:_desktop-vdi1/f927ceb8-91d2-41bd-ba42-dc795395b6d0/images/9959c6e4-1fb7-455b-ad5e-8b9e2324a0ab/3f4e183b-7957-4bee-9153-9d967491f882'],
> ecode=1, stdout=, stderr=qemu-img: error while reading sector
> 143654878: No data available\n, message=None",)
>
> Can you verify those disks are indeed valid? Can you IO to them while
> attaching them to a running VM?
>
> On Tue, May 30, 2017 at 9:10 PM, Bryan Sockel  wrote:
>>
>> Hi,
>>
>> I am trying to rebuild my ovirt environment after having to juggle some
>> hardware around.  I am moving from hosted engine environment into a engine
>> install on a dedicated server.  I have two data centers in my setup and
>> each
>> DC has a non-routable vlan dedicated to storage.
>>
>> As i rebuild my setup i am trying to clean up my storage configuration.  I
>> am attempting to copy vm's and templates to a dedicated gluster setup.
>> However  each time i attempt to copy a template or move a vm, the
>> operation
>> fails.  The failure always happens when it is finalizing the copy.
>>
>> The operation does not happen with all vm's, but seems to happen mostly
>> with
>> vm's created from with in Ovirt, and not imported from vmware.
>>
>>
>> I have attached the logs from where i was trying to copy to templates to a
>> new Gluster Filesystem
>>
>> Thanks
>> Bryan
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>


I'm not sure if this is related to the hardware that was juggled, but
it seems like the volume has a bad sector and qemu-img reports it.
Do you maybe still have this volume in another storage domain from
before the hardware change, so we can rule out the hardware change as
the cause?
You can also open a bug so it will be easier to track and investigate
it:  https://bugzilla.redhat.com/enter_bug.cgi?product=vdsm
Please also attach the engine and VDSM logs.

Regards,
Maor
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot Image ID

2017-06-04 Thread Maor Lipchuk
Hi Marcelo,

Can you please elaborate a bit more on the issue?
Can you please also attach the engine and VDSM logs?

Thanks,
Maor

On Wed, May 31, 2017 at 10:08 AM, Sandro Bonazzola 
wrote:

>
>
> On Tue, May 23, 2017 at 4:25 PM, Marcelo Leandro 
> wrote:
>
>> I see now that the image base ID stays in the snapshot before the last
>> one. I think it should stay in the last one, correct?
>>
>> Thanks,
>> Marcelo Leandro
>>
>> 2017-05-23 11:16 GMT-03:00 Marcelo Leandro :
>>
>>> Good morning,
>>> I have a problem with a snapshot: my last snapshot, which should have
>>> the image base ID, does not contain this reference in the sub-tab in
>>> the dashboard, as you can see in the attached picture.
>>> I think it is necessary to add this reference to the database again.
>>> The image exists in the storage and is used for VM execution, but I
>>> think that when I shut down the VM it will not start anymore.
>>>
>>> someone help me?
>>>
>>> Thanks,
>>> Marcelo Leandro
>>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage domain does not exist

2017-06-04 Thread Maor Lipchuk
It seems related to the destroy of the storage domain, since destroy
will still remove it from the engine even if there is a problem.
Have you tried restarting VDSM, or rebooting the host?
There could be a problem with unmounting the storage domain.
Can you please send the VDSM logs?

Regards,
Maor

On Thu, Jun 1, 2017 at 2:19 PM, Bryan Sockel  wrote:
> Hey,
>
> I am having an issue moving/copying vm's and templates around in my ovirt
> environment. I am getting the following error in my VDSM Logs:
>
> "2017-06-01 01:01:54,425-0500 ERROR (jsonrpc/5) [storage.Dispatcher]
> {'status': {'message': "Storage domain does not exist:
> (u'97258ca6-5acd-40c7-a812-b8cac02a9621',)", 'code': 358}} (dispatcher:77)"
>
> I believe this started when i destroyed a data domain instead of removing
> the domain correctly.  I have since then rebuilt my ovirt environment,
> importing my gluster domains back into my new setup.
>
> I believe the issue is related to some stale metadata on my gluster storage
> servers somewhere but do not know how to remove it or where it exists.
>
> I found these two posts that seem to deal with the same problem i am seeing.
>
> https://access.redhat.com/solutions/180623
> and
> https://access.redhat.com/solutions/2355061
>
> Currently running 2 dedicated gluster servers and 2 Ovirt VM hosts servers,
> one is acting as an arbiter for my replica 3 gluster file system.
>
> All hosts are running CentOS Linux release 7.3.1611 (Core)
>
> Thanks
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-04 Thread Maor Lipchuk
Hi Alex,

I saw a bug that might be related to the issue you encountered at
https://bugzilla.redhat.com/show_bug.cgi?id=1386443

Sahina, do you maybe have any advice? Do you think that BZ 1386443 is related?

Regards,
Maor

On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi  wrote:
> Hi All,
>
> I have installed successfully several times oVirt (version 4.1) with 3 nodes
> on top glusterfs.
>
> This time, when trying to configure the same setup, I am facing the
> following issue which doesn't seem to go away. During installation i get the
> error:
>
> Failed to execute stage 'Misc configuration': Cannot acquire host id:
> (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock
> lockspace add failure', 'Invalid argument'))
>
> The only different in this setup is that instead of standard partitioning i
> have GPT partitioning and the disks have 4K block size instead of 512.
>
> The /var/log/sanlock.log has the following lines:
>
> 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace
> ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/ids:0
> 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource
> ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/leases:1048576
> for 2,9,23040
> 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace
> a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922:250:/rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids:0
> 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD
> 0x7f59b8c0:0x7f59b8d0:0x7f59b0101000 result -22:0 match res
> 2017-06-03 19:21:36+0200 23471 [23522]: read_sectors delta_leader offset
> 127488 rv -22
> /rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids
> 2017-06-03 19:21:37+0200 23472 [930]: s9 host 250 1 23450
> 88c2244c-a782-40ed-9560-6cfa4d46f853.v0.neptune
> 2017-06-03 19:21:37+0200 23472 [943]: s10 add_lockspace fail result -22
>
> And /var/log/vdsm/vdsm.log says:
>
> 2017-06-03 19:19:38,176+0200 WARN  (jsonrpc/3)
> [storage.StorageServer.MountConnection] Using user specified
> backup-volfile-servers option (storageServer:253)
> 2017-06-03 19:21:12,379+0200 WARN  (periodic/1) [throttled] MOM not
> available. (throttledlog:105)
> 2017-06-03 19:21:12,380+0200 WARN  (periodic/1) [throttled] MOM not
> available, KSM stats will be missing. (throttledlog:105)
> 2017-06-03 19:21:14,714+0200 WARN  (jsonrpc/1)
> [storage.StorageServer.MountConnection] Using user specified
> backup-volfile-servers option (storageServer:253)
> 2017-06-03 19:21:15,515+0200 ERROR (jsonrpc/4) [storage.initSANLock] Cannot
> initialize SANLock for domain a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922
> (clusterlock:238)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line
> 234, in initSANLock
> sanlock.init_lockspace(sdUUID, idsPath)
> SanlockException: (107, 'Sanlock lockspace init failure', 'Transport
> endpoint is not connected')
> 2017-06-03 19:21:15,515+0200 WARN  (jsonrpc/4)
> [storage.StorageDomainManifest] lease did not initialize successfully
> (sd:557)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/sd.py", line 552, in initDomainLock
> self._domainLock.initLock(self.getDomainLease())
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line
> 271, in initLock
> initSANLock(self._sdUUID, self._idsPath, lease)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line
> 239, in initSANLock
> raise se.ClusterLockInitError()
> ClusterLockInitError: Could not initialize cluster lock: ()
> 2017-06-03 19:21:37,867+0200 ERROR (jsonrpc/2) [storage.StoragePool] Create
> pool hosted_datacenter canceled  (sp:655)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/sp.py", line 652, in create
> self.attachSD(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
> 79, in wrapper
> return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 971, in attachSD
> dom.acquireHostId(self.id)
>   File "/usr/share/vdsm/storage/sd.py", line 790, in acquireHostId
> self._manifest.acquireHostId(hostId, async)
>   File "/usr/share/vdsm/storage/sd.py", line 449, in acquireHostId
> self._domainLock.acquireHostId(hostId, async)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line
> 297, in acquireHostId
> raise se.AcquireHostIdFailure(self._sdUUID, e)
> AcquireHostIdFailure: Cannot acquire host id:
> (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock
> lockspace add failure', 'Invalid argument'))
> 2017-06-03 19:21:37,870+0200 ERROR (jsonrpc/2) [storage.StoragePool] Domain
> ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047 

Re: [ovirt-users] VM/Template copy issue

2017-06-02 Thread Maor Lipchuk
Hi Bryan,

It seems like there was an error from qemu-img while reading sector 143654878.
The image copy (conversion) failed with a low level qemu-img failure:

CopyImageError: low level Image copy failed:
("cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/nice',
'-n', '19', '/usr/bin/ionice', '-c', '3', '/usr/bin/qemu-img',
'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw',
u'/rhev/data-center/d776b537-16f2-4543-bd96-9b4cba69e247/e371d380-7194-4950-b901-5f2aed5dfb35/images/9959c6e4-1fb7-455b-ad5e-8b9e2324a0ab/3f4e183b-7957-4bee-9153-9d967491f882',
'-O', 'raw', 
u'/rhev/data-center/mnt/glusterSD/vs-host-colo-1-gluster.altn.int:_desktop-vdi1/f927ceb8-91d2-41bd-ba42-dc795395b6d0/images/9959c6e4-1fb7-455b-ad5e-8b9e2324a0ab/3f4e183b-7957-4bee-9153-9d967491f882'],
ecode=1, stdout=, stderr=qemu-img: error while reading sector
143654878: No data available\n, message=None",)

Can you verify those disks are indeed valid? Can you do I/O on them while
they are attached to a running VM?
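As a hedged suggestion for narrowing it down, you could also try reading the
exact region qemu-img complained about directly from the source image
(qemu-img reports offsets in 512-byte sectors); a sketch only, adjust the path
to the source volume reported above:

  dd if=/rhev/data-center/.../3f4e183b-7957-4bee-9153-9d967491f882 \
     of=/dev/null bs=512 skip=143654878 count=1

If that dd also fails with an I/O error, the problem is in the underlying
storage rather than in the copy operation itself.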

On Tue, May 30, 2017 at 9:10 PM, Bryan Sockel  wrote:
>
> Hi,
>
> I am trying to rebuild my ovirt environment after having to juggle some
> hardware around.  I am moving from a hosted engine environment into an engine
> install on a dedicated server.  I have two data centers in my setup and each
> DC has a non-routable vlan dedicated to storage.
>
> As I rebuild my setup I am trying to clean up my storage configuration.  I
> am attempting to copy vm's and templates to a dedicated gluster setup.
> However, each time I attempt to copy a template or move a vm, the operation
> fails.  The failure always happens when it is finalizing the copy.
>
> The operation does not happen with all vm's, but seems to happen mostly with
> vm's created from within oVirt, and not imported from vmware.
>
>
> I have attached the logs from where i was trying to copy to templates to a
> new Gluster Filesystem
>
> Thanks
> Bryan
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Empty ovirt engine web pages

2017-04-18 Thread Maor Lipchuk
On Tue, Apr 18, 2017 at 10:50 AM, Maor Lipchuk <mlipc...@redhat.com> wrote:

> Hi Shubham,
>
> In case you do not need the DWH setup, you can disable the Dashboard
> background queries via the ovirt-engine.conf.
>
> Please, add the following file to your $DEV_OVIRT_PREFIX folder:
>  "$DEV_OVIRT_PREFIX/etc/ovirt-engine/engine.conf.d/99-no-dashboard.conf"
> and add to it the following content:
>
>  #
>  # Enable/disable updating the dashboard DB query caches at regular
> intervals.
>  #
>  DASHBOARD_CACHE_UPDATE=false
>
> This should stop the error message
>
> Regards,
> Maor
>

Deleted the extra copied info, it should be irrelevant.


>
> On Mon, Apr 17, 2017 at 3:50 PM, Alexander Wels <aw...@redhat.com> wrote:
>
>> On Friday, April 14, 2017 9:38:23 AM EDT Sandro Bonazzola wrote:
>> > Adding Martin
>> >
>>
>> That exception is telling you it can't run the queries needed to populate
>> the
>> dashboard. Highly likely you don't have the DWH properly
>> installed/configured.
>> That still shouldn't give you just a blank page, the code is written to
>> display an error stating unable to display dashboard or something of that
>> nature.
>>
>> > On Thu, Apr 13, 2017 at 5:51 AM, shubham dubey <sdubey...@gmail.com>
>> wrote:
>> > > Hello,
>> > > I have installed ovirt engine from source and installed all other
>> required
>> > > packages also,
>> > > including ovirt-js-dependencies.But when I am login to the admin
>> account I
>> > > am getting blank
>> > > page everytime. Some other pages are also coming empty sometime.
>> > > I have pasted the logs for $HOME/ovirt-engine/share/
>> > > ovirt-engine/services/ovirt-engine/ovirt-engine.py start[1].The
>> possible
>> > > error log is
>> > >
>> > > 2017-04-13 09:07:09,902+05 ERROR [org.ovirt.engine.ui.frontend.
>> > > server.dashboard.DashboardDataServlet.CacheUpdate.Utilization]
>> > > (EE-ManagedThreadFactory-default-Thread-1) [] Could not update the
>> > > Utilization Cache: Error while running SQL query:
>> > > org.ovirt.engine.ui.frontend.server.dashboard.DashboardDataException:
>> > > Error while running SQL query
>> > >
>> > >at
>> > >org.ovirt.engine.ui.frontend.server.dashboard.dao.BaseDao.ru
>> nQuery
>> > >(BaseDao.java:60)>
>> > > [frontend.jar:]
>> > >
>> > >at org.ovirt.engine.ui.frontend.s
>> erver.dashboard.dao.HostDwhDao.
>> > >
>> > > getTotalCpuMemCount(HostDwhDao.java:78) [frontend.jar:]
>> > >
>> > >at org.ovirt.engine.ui.frontend.server.dashboard.
>> > >
>> > > HourlySummaryHelper.getTotalCpuMemCount(HourlySummaryHelper.java:43)
>> > > [frontend.jar:]
>> > >
>> > >at org.ovirt.engine.ui.frontend.server.dashboard.
>> > >
>> > > HourlySummaryHelper.getCpuMemSummary(HourlySummaryHelper.java:21)
>> > > [frontend.jar:]
>> > >
>> > >at org.ovirt.engine.ui.frontend.server.dashboard.
>> > >
>> > > DashboardDataServlet.lookupGlobalUtilization(DashboardDataSe
>> rvlet.java:294
>> > > )
>> > > [frontend.jar:]
>> > >
>> > >at org.ovirt.engine.ui.frontend.server.dashboard.
>> > >
>> > > DashboardDataServlet.getDashboard(DashboardDataServlet.java:268)
>> > > [frontend.jar:]
>> > >
>> > >at org.ovirt.engine.ui.frontend.server.dashboard.
>> > >
>> > > DashboardDataServlet.populateUtilizationCache(DashboardDataS
>> ervlet.java:23
>> > > 1) [frontend.jar:]
>> > >
>> > >at org.ovirt.engine.ui.frontend.server.dashboard.
>> > >
>> > > DashboardDataServlet.access$000(DashboardDataServlet.java:26)
>> > > [frontend.jar:]
>> > >
>> > >at org.ovirt.engine.ui.frontend.server.dashboard.
>> > >
>> > > DashboardDataServlet$1.run(DashboardDataServlet.java:106)
>> [frontend.jar:]
>> > >
>> > >at
>> > >java.util.concurrent.Executors$RunnableAdapter.call(
>> Executors.java
>> > >:511)
>> > >
>> > > [rt.jar:1.8.0_121]
>> > >
>> > >at java.util.concurrent.FutureTas
>> k.runA

Re: [ovirt-users] Empty ovirt engine web pages

2017-04-18 Thread Maor Lipchuk
Hi Shubham,

In case you do not need the DWH setup, you can disable the Dashboard
background queries via the ovirt-engine.conf.

Please, add the following file to your $DEV_OVIRT_PREFIX folder:
 "$DEV_OVIRT_PREFIX/etc/ovirt-engine/engine.conf.d/99-no-dashboard.conf"
and add to it the following content:

 #
 # Enable/disable updating the dashboard DB query caches at regular
intervals.
 #
 DASHBOARD_CACHE_UPDATE=false

This should stop the error message.
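For convenience, a minimal shell sketch of creating that file, assuming
$DEV_OVIRT_PREFIX is already set in your environment:

  mkdir -p "$DEV_OVIRT_PREFIX/etc/ovirt-engine/engine.conf.d"
  cat > "$DEV_OVIRT_PREFIX/etc/ovirt-engine/engine.conf.d/99-no-dashboard.conf" <<'EOF'
  #
  # Enable/disable updating the dashboard DB query caches at regular intervals.
  #
  DASHBOARD_CACHE_UPDATE=false
  EOF

A restart of the engine is probably needed for the change to be picked up.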

Regards,
Maor


-- 
Scott Dickerson
Senior Software Engineer
RHEV-M Engineering - UX Team
Red Hat, Inc

___
Devel mailing list
de...@ovirt.org
http://lists.phx.ovirt.org/mailman/listinfo/devel
Yanir Quinn <yqu...@redhat.com>, 12/12/16, to Scott and devel:
setting DASHBOARD_CACHE_UPDATE=false indeed stops the issue.

Thanks.

___
Devel mailing list
de...@ovirt.org
http://lists.phx.ovirt.org/mailman/listinfo/devel

On Mon, Apr 17, 2017 at 3:50 PM, Alexander Wels  wrote:

> On Friday, April 14, 2017 9:38:23 AM EDT Sandro Bonazzola wrote:
> > Adding Martin
> >
>
> That exception is telling you it can't run the queries needed to populate
> the
> dashboard. Highly likely you don't have the DWH properly
> installed/configured.
> That still shouldn't give you just a blank page, the code is written to
> display an error stating unable to display dashboard or something of that
> nature.
>
> > On Thu, Apr 13, 2017 at 5:51 AM, shubham dubey 
> wrote:
> > > Hello,
> > > I have installed ovirt engine from source and installed all other
> required
> > > packages also,
> > > including ovirt-js-dependencies.But when I am login to the admin
> account I
> > > am getting blank
> > > page everytime. Some other pages are also coming empty sometime.
> > > I have pasted the logs for $HOME/ovirt-engine/share/
> > > ovirt-engine/services/ovirt-engine/ovirt-engine.py start[1].The
> possible
> > > error log is
> > >
> > > 2017-04-13 09:07:09,902+05 ERROR [org.ovirt.engine.ui.frontend.
> > > server.dashboard.DashboardDataServlet.CacheUpdate.Utilization]
> > > (EE-ManagedThreadFactory-default-Thread-1) [] Could not update the
> > > Utilization Cache: Error while running SQL query:
> > > org.ovirt.engine.ui.frontend.server.dashboard.DashboardDataException:
> > > Error while running SQL query
> > >
> > >at
> > >org.ovirt.engine.ui.frontend.server.dashboard.dao.BaseDao.
> runQuery
> > >(BaseDao.java:60)>
> > > [frontend.jar:]
> > >
> > >at org.ovirt.engine.ui.frontend.server.dashboard.dao.
> HostDwhDao.
> > >
> > > getTotalCpuMemCount(HostDwhDao.java:78) [frontend.jar:]
> > >
> > >at org.ovirt.engine.ui.frontend.server.dashboard.
> > >
> > > HourlySummaryHelper.getTotalCpuMemCount(HourlySummaryHelper.java:43)
> > > [frontend.jar:]
> > >
> > >at org.ovirt.engine.ui.frontend.server.dashboard.
> > >
> > > HourlySummaryHelper.getCpuMemSummary(HourlySummaryHelper.java:21)
> > > [frontend.jar:]
> > >
> > >at org.ovirt.engine.ui.frontend.server.dashboard.
> > >
> > > DashboardDataServlet.lookupGlobalUtilization(
> DashboardDataServlet.java:294
> > > )
> > > [frontend.jar:]
> > >
> > >at org.ovirt.engine.ui.frontend.server.dashboard.
> > >
> > > DashboardDataServlet.getDashboard(DashboardDataServlet.java:268)
> > > [frontend.jar:]
> > >
> > >at org.ovirt.engine.ui.frontend.server.dashboard.
> > >
> > > DashboardDataServlet.populateUtilizationCache(
> DashboardDataServlet.java:23
> > > 1) [frontend.jar:]
> > >
> > >at org.ovirt.engine.ui.frontend.server.dashboard.
> > >
> > > DashboardDataServlet.access$000(DashboardDataServlet.java:26)
> > > [frontend.jar:]
> > >
> > >at org.ovirt.engine.ui.frontend.server.dashboard.
> > >
> > > DashboardDataServlet$1.run(DashboardDataServlet.java:106)
> [frontend.jar:]
> > >
> > >at
> > >java.util.concurrent.Executors$RunnableAdapter.
> call(Executors.java
> > >:511)
> > >
> > > [rt.jar:1.8.0_121]
> > >
> > >at java.util.concurrent.FutureTask.runAndReset(
> FutureTask.java:308)
> > >
> > > [rt.jar:1.8.0_121]
> > >
> > >at org.glassfish.enterprise.concurrent.internal.
> > >
> > > ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.
> access$201(
> > > ManagedScheduledThreadPoolExecutor.java:383)
> [javax.enterprise.concurrent-
> > > 1.0.jar:]
> > >
> > >at org.glassfish.enterprise.concurrent.internal.
> > >
> > > ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(
> > > ManagedScheduledThreadPoolExecutor.java:534)
> [javax.enterprise.concurrent-
> > > 1.0.jar:]
> > >
> > >at
> > >java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecut
> > >or.java:1142)>
> > > [rt.jar:1.8.0_121]
> > >
> > >at
> > >java.util.concurrent.ThreadPoolExecutor$Worker.run(
> 

Re: [ovirt-users] Interested in contributing to OVirt Community

2017-03-22 Thread Maor Lipchuk
Hi Venkat,

Nice to know you :) Thank you for your interest in the GSoC project
idea "Configuring a Backup Storage for oVirt".

The idea is to have an alternative for backing up entities like VMs,
Templates and disks from a storage domain.

I will be happy if you apply to this GSoC project, although another
student has already applied to it, so in the end we will probably only
choose one applicant.
On the other hand, you could take a look at the other oVirt GSoC
proposals, which you might find interesting.

I suggest that for start, try to build a working oVirt development
setup with hosts and a few storage domains.

You can follow this wiki:
  
http://www.ovirt.org/develop/developer-guide/engine/engine-development-environment/

Once you have a working oVirt env we can go forward to the next step.
Please don't hesitate to ask if you have any questions.
You can also log in to the #ovirt IRC channel for any questions.

Regards,
Maor


-- Forwarded message --
From: Venkat Datta 
Date: Sat, Mar 18, 2017 at 7:19 PM
Subject: [ovirt-users] Interested in contributing to OVirt Community
To: users@ovirt.org


Hi Team ,
   I'm an undergraduate student doing my 4th year of
Engineering at PES University, India. I'm interested in working on
the GSoC project idea "Configuring a Backup Storage for oVirt". It
would be really helpful for me to get suggestions about how to get
started on this project.


Thanks,
Venkat Datta N H

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Contributing on ovirt

2017-03-05 Thread Maor Lipchuk
Hi shubham,

Thank you for your interest in configuring the backup storage as
part of the GSoC project.
The idea is to have an alternative for backing up entities like VMs,
Templates and disks from a storage domain.

oVirt has several methods of backing up entities, such as DB backup, the
import storage domain and the export storage domain.
This new capability to define a storage domain as a backup might give
oVirt users an easier alternative solution for backing up entities.

I suggest that for start, try to build a working oVirt development
setup with hosts and a few storage domains.

You can follow this wiki:
  
http://www.ovirt.org/develop/developer-guide/engine/engine-development-environment/

Once you have a working oVirt env we can go forward to the next step.
Please don't hesitate to ask if you have any questions.
You can use the #ovirt channel also.

Regards,
Maor

On Fri, Mar 3, 2017 at 11:27 AM, shubham dubey  wrote:
> Hello,
> I am interested in being part of ovirt for gsoc 2017.
> I have looked into ovirt project ideas and the project that I find most
> suitable is Configuring the backup storage in ovirt.
>
> Since the ovirt online docs have sufficient info for getting started with
> development, I don't have questions about that, but I want to clarify one
> doubt: are the projects mentioned on the previous year's ovirt GSoC page
> also available to work on?
> I would also appreciate any discussion about the project or questions from
> the mentor's side. Even some guidelines for starting work are welcome.
>
> Thanks,
> Shubham
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Info on oVirt 4.1 and qcow2 v3 format

2017-02-09 Thread Maor Lipchuk
On Thu, Feb 9, 2017 at 3:31 PM, Yaniv Kaul  wrote:

>
>
> On Thu, Feb 9, 2017 at 2:57 PM, Gianluca Cecchi  > wrote:
>
>> Hello,
>> after upgrading to oVirt 4.1 and setting cluster/datacenter to version 4,
>> new images and snapshots should be created in qcow2 v3 format, based on
>> bugzilla included in release notes:
>> https://bugzilla.redhat.com/827529
>> Correct?
>>
>> What about existing disks? Is there any way to convert?
>>
>
> See the 'amend' part in the feature description page at[1].
> Y.
>
> [1] http://www.ovirt.org/develop/release-management/features/sto
> rage/qcow2v3/
>
>
>> What in case I have an old disk, I create a snapshot in 4.1 and then
>> delete the snapshot? Will the live merged disk be in v3 format or v2?
>>
>
Since the snapshot merge is done using qemu-img commit,
the qcow version of the active volume after the merge will be the same as
that of the old volume.

So, for example, if you had a QCOW2 version 2 volume with compatibility
level 0.10 and you create a new snapshot on a 4.1 DC (which will make the
new QCOW volume have compatibility level 1.1, i.e. QCOW2V3),
and you afterwards delete that snapshot, then the QCOW volume will again be
the same as it was before: QCOW2 version 2 with compatibility level 0.10.

As Yaniv mentioned above, you can use the amend operation to update the
QCOW version of all of the disk's volumes.
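If it helps, here is a hedged sketch of how one could check and, if desired,
update the compat level of a single qcow2 volume from the command line. This is
plain qemu-img usage, not oVirt's own amend flow, so it should only be run on
volumes that are not in use:

  qemu-img info /path/to/volume             # "compat: 0.10" = QCOW2 v2, "compat: 1.1" = QCOW2 v3
  qemu-img amend -f qcow2 -o compat=1.1 /path/to/volume

Within oVirt itself, the amend operation mentioned above is the way to do this
for all of a disk's volumes.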


>
>> Thanks,
>> Gianluca
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage Domain sizing

2017-01-23 Thread Maor Lipchuk
I know that there is a monitoring process which monitors each Storage
Domain, so I guess that if you have multiple storage domains the monitor
will run multiple times in every cycle, but I think that the effect on the
Host or the Storage Domain is insignificant.

You should also consider the limit on logical volumes in a volume group:
IINM LVM has a default limit of 255 volumes (I'm not sure how it behaves
regarding efficiency if you use more volumes).
oVirt also uses an LV for each snapshot, so with one storage domain you might
hit this limit pretty quickly.
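A hedged way to keep an eye on this from any host that sees the storage, using
standard LVM tooling (for block storage domains the VG name is the storage
domain UUID):

  vgs -o vg_name,lv_count

This is only a rough indicator of how many LVs (disks, snapshots and the
domain's own metadata volumes) each domain currently holds.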

On Mon, Jan 23, 2017 at 4:35 PM, Grundmann, Christian <
christian.grundm...@fabasoft.com> wrote:

> Hi,
>
> thx for you input
>
> both aren’t problems for me, all domains are from the same storage and so
> can’t be maintained independently.
>
>
>
> Are there performance problems with only one Domain (like waiting for
> locks etc.) which I don’t have that much with multiple?
>
>
>
> Thx Christian
>
>
>
>
>
> *Von:* Maor Lipchuk [mailto:mlipc...@redhat.com]
> *Gesendet:* Montag, 23. Jänner 2017 15:32
> *An:* Grundmann, Christian <christian.grundm...@fabasoft.com>
> *Cc:* users@ovirt.org
> *Betreff:* Re: [ovirt-users] Storage Domain sizing
>
>
>
> There are many factors that can be discussed on this issue,
>
> two things that pop up on my mind are that many storage domains will make
> your Data Center be more robust and flexible, since you can maintain part
> of the storage domains which could help with upgrading your storage server
> in the future or fixing issues that might occur in your storage, without
> moving your Data Center to a non operational state.
>
>
>
> One storage domain is preferrable if you want to use large disks with your
> VMs that small storage domains does not have the capacity to do so
>
>
>
> Regards,
>
> Maor
>
>
>
> On Mon, Jan 23, 2017 at 9:52 AM, Grundmann, Christian <
> christian.grundm...@fabasoft.com> wrote:
>
> Hi,
>
> I am about to migrate to a new storage.
>
> What's the best practice in sizing?
>
> 1 big Storage Domain or multiple smaller ones?
>
>
>
> My current Setup:
>
> 11 Hosts
>
> 7 FC Storage Domains 1 TB each
>
>
>
> Can anyone tell me the pro and cons of 1 vs. many?
>
>
>
>
>
> Thx Christian
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage Domain sizing

2017-01-23 Thread Maor Lipchuk
There are many factors that can be discussed on this issue;
two things that come to mind are that many storage domains will make
your Data Center more robust and flexible, since you can put part
of the storage domains into maintenance, which could help with upgrading your
storage server in the future or fixing issues that might occur in your storage,
without moving your Data Center to a non-operational state.

One storage domain is preferable if you want to use large disks with your
VMs that small storage domains do not have the capacity for.

Regards,
Maor

On Mon, Jan 23, 2017 at 9:52 AM, Grundmann, Christian <
christian.grundm...@fabasoft.com> wrote:

> Hi,
>
> I am about to migrate to a new storage.
>
> What's the best practice in sizing?
>
> 1 big Storage Domain or multiple smaller ones?
>
>
>
> My current Setup:
>
> 11 Hosts
>
> 7 FC Storage Domains 1 TB each
>
>
>
> Can anyone tell me the pro and cons of 1 vs. many?
>
>
>
>
>
> Thx Christian
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] domain types for export and iso: no fcp and no iscsi

2017-01-22 Thread Maor Lipchuk
On Sun, Jan 22, 2017 at 3:26 PM, Nir Soffer <nsof...@redhat.com> wrote:

> On Sun, Jan 22, 2017 at 3:08 PM, Gianluca Cecchi
> <gianluca.cec...@gmail.com> wrote:
> > Il 22/Gen/2017 13:23, "Maor Lipchuk" <mlipc...@redhat.com> ha scritto:
> >
> >
> > That is indeed the behavior.
> > ISO and export are only supported through mount options.
> >
> >
> > Ok
> >
> >
> > I think that the goal is to replace those types of storage domains.
> >
> >
> > In which sense/way?
> >
> > For example the backup that the export storage domain is used for can be
> > used with a regular data storage domain since oVirt 3.6.
> >
> >
> > I have not understood ... can you detail this?

In some future version, we can replace iso and export domain with a regular
> data domain.
>
> For example, iso file will be uploaded to regular volume (on file or
> block). When
> you want to attach an iso to a vm, we can activate the volume and have the
> vm
> use the path to the volume.
>
> For an export domain, we could copy the disks to regular volumes. The vm
> metadata will be kept in the OVF_STORE of the domain.
>
> Or, instead of export, you can download the vm to ova file, and
> instead of import,
> upload the vm from ova file.
>


Here is an example of how the user can migrate VMs/templates between different
Data Centers with the "import storage domain" feature instead of using an
export domain:

https://www.youtube.com/watch?v=DLcxDB0MY38=PL2NsEhIoqsJFsDWYKZ0jzJba11L_-xSds=4

So this is only one feature that the export storage domain is used for today
which, as of oVirt 3.5, can be done with any storage domain.
In the future we plan to enhance this by letting a certain storage domain be
pre-defined as a backup storage domain, which will have features similar to
the export storage domain and more.


> > I have available a test environment with several San LUNs and no easy
> way to
> > connect to nfs or similar external resources... how could I provide
> export
> > and iso in this case? Creating a specialized vm and exporting from there,
> > even if it would be a circular dependency then..? Suggestions?
>
> Maybe upload image via the ui/REST?
>
> Nir
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to attach a disk

2017-01-22 Thread Maor Lipchuk
Hi Fabrice,

Can you please attach the VDSM and engine logs?
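Also, since the listing below shows a mix of 'kvm' and 'qemu' group owners on
the image files, it might be worth a quick check that the qemu user on that
upgraded host is still a member of the kvm group. Just a hedged diagnostic,
assuming the standard user/group names:

  id qemu             # should list kvm among its groups
  getent group kvm    # should list qemu as a member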

Thanks,
Maor

On Wed, Jan 18, 2017 at 5:05 PM, Fabrice Bacchella <
fabrice.bacche...@icloud.com> wrote:

> I upgraded an host to the latest version of vdsm:
> vdsm-4.18.21-1.el7.centos.x86_64, on a CentOS Linux release 7.3.1611
> (Core)
>
> I then created a disk that I wanted to attach to a running vm, but it
> fails, with the message in /var/log/libvirt/qemu/.log:
>
> Could not open '/rhev/data-center/17434f4e-8d1a-4a88-ae39-d2ddd46b3b9b/
> 7c5291d3-11e2-420f-99ad-47a376013671/images/4d33f997-
> 94b0-42c1-8052-5364993b85e9/8e613dd3-eebc-476a-a830-e2a8236ea8a8':
> Permission denied
>
> I tried to have a look at the disks images, and got a strange result:
>
> -rw-rw 1 vdsm qemu 1.0M May 18  2016 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/ed18c515-09c9-4a71-af0a-7f0934193a65/
> b5e53c81-2279-4f2b-b282-69db430d36d4.lease
> -rw-rw 1 vdsm qemu 1.0M May 18  2016 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/3a00232b-c1f9-4b9b-910e-caf8b0321609/
> 4f6d5c63-6a36-4356-832e-f52427d9512e.lease
> -rw-rw 1 vdsm qemu 1.0M May 23  2016 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/b0f4c517-e492-409f-934f-1561281a242b/
> a3d60d8a-f89b-41dd-b519-fb652301b1f5.lease
> -rw-rw 1 vdsm qemu 1.0M May 23  2016 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/465df4e9-3c62-4501-889f-cbab65ed0e0d/
> 7a9b9033-f5f8-4eaa-ac94-6cc0c4ff6120.lease
> -rw-rw 1 vdsm kvm  1.0M Jan  6 18:00 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/baf01c4e-ede9-4e4e-a265-172695d81a83/
> 4cdd72a7-b347-4479-accd-ab08d61552f9.lease
> -rw-rw 1 vdsm kvm  1.0M Jan 18 15:38 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/4d33f997-94b0-42c1-8052-5364993b85e9/
> 8e613dd3-eebc-476a-a830-e2a8236ea8a8.lease
>
> -rw-r--r-- 1 vdsm qemu 314 May 23  2016 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/b0f4c517-e492-409f-934f-1561281a242b/
> a3d60d8a-f89b-41dd-b519-fb652301b1f5.meta
> -rw-r--r-- 1 vdsm kvm  314 Jan  6 18:00 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/465df4e9-3c62-4501-889f-cbab65ed0e0d/
> 7a9b9033-f5f8-4eaa-ac94-6cc0c4ff6120.meta
> -rw-r--r-- 1 vdsm kvm  307 Jan  6 18:00 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/baf01c4e-ede9-4e4e-a265-172695d81a83/
> 4cdd72a7-b347-4479-accd-ab08d61552f9.meta
> -rw-r--r-- 1 vdsm kvm  437 Jan 18 11:32 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/ed18c515-09c9-4a71-af0a-7f0934193a65/
> b5e53c81-2279-4f2b-b282-69db430d36d4.meta
> -rw-r--r-- 1 vdsm kvm  437 Jan 18 11:32 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/3a00232b-c1f9-4b9b-910e-caf8b0321609/
> 4f6d5c63-6a36-4356-832e-f52427d9512e.meta
> -rw-r--r-- 1 vdsm kvm  310 Jan 18 15:38 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/4d33f997-94b0-42c1-8052-5364993b85e9/
> 8e613dd3-eebc-476a-a830-e2a8236ea8a8.meta
>
> -rw-rw 1 vdsm qemu  16G Jan  9 17:25 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/baf01c4e-ede9-4e4e-a265-172695d81a83/
> 4cdd72a7-b347-4479-accd-ab08d61552f9
> -rw-rw 1 vdsm qemu  30K Jan 18 11:32 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/ed18c515-09c9-4a71-af0a-7f0934193a65/
> b5e53c81-2279-4f2b-b282-69db430d36d4
> -rw-rw 1 vdsm qemu  30K Jan 18 11:32 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/3a00232b-c1f9-4b9b-910e-caf8b0321609/
> 4f6d5c63-6a36-4356-832e-f52427d9512e
> -rw-rw 1 vdsm kvm  300G Jan 18 15:38 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/4d33f997-94b0-42c1-8052-5364993b85e9/
> 8e613dd3-eebc-476a-a830-e2a8236ea8a8
> -rw-rw 1 vdsm qemu  32G Jan 18 15:58 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/b0f4c517-e492-409f-934f-1561281a242b/
> a3d60d8a-f89b-41dd-b519-fb652301b1f5
> -rw-rw 1 vdsm qemu  32G Jan 18 15:58 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/465df4e9-3c62-4501-889f-cbab65ed0e0d/
> 7a9b9033-f5f8-4eaa-ac94-6cc0c4ff6120
>
>
> What a strange mix of group owners. Any explanation for that? Is that a
> known bug?
>
> The new disk is the 300G one, owned by kvm.
>
>
> 

Re: [ovirt-users] fast import to ovirt

2017-01-22 Thread Maor Lipchuk
Hi paf1,

Have you tried the import storage domain feature?
You can take a look at how it is done at
http://www.ovirt.org/develop/release-management/features/storage/importstoragedomain/
Work flow for Import File Storage Domain - UI flow
https://www.youtube.com/watch?v=YbU-DIwN-Wc

On Thu, Jan 19, 2017 at 11:44 AM, p...@email.cz  wrote:

> Hello,
> how can I import a VM from a different ovirt environment? There is no common
> mgmt ovirt. (ovirt 3.5 -> 4.0)
> GlusterFS is used.
> Will ovirt accept "rsync" file migrations, meaning will it update the oVirt DB
> automatically?
> I'd prefer a quicker method than export - umount oV1 - mount oV2 - import.
>
> regards
> paf1
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] master storage domain stuck in locked state

2017-01-22 Thread Maor Lipchuk
On Sun, Jan 22, 2017 at 2:31 PM, Maor Lipchuk <mlipc...@redhat.com> wrote:

> Hi Bill,
>
> Can you please attach the engine and VDSM logs.
> Does the storage domain still stuck?
>

Also which oVirt version are you using?


>
> Regards,
> Maor
>
> On Sat, Jan 21, 2017 at 3:11 AM, Bill Bill <jax2...@outlook.com> wrote:
>
>>
>>
>> Also cannot reinitialize the datacenter because the storage domain is
>> locked.
>>
>>
>>
>> *From: *Bill Bill <jax2...@outlook.com>
>> *Sent: *Friday, January 20, 2017 8:08 PM
>> *To: *users <users@ovirt.org>
>> *Subject: *RE: master storage domain stuck in locked state
>>
>>
>>
>> Spoke too soon. Some hosts came back up but the storage domain is still
>> locked so no vm’s can be started. What is the proper way to force this to
>> be unlocked? Each time we look to move into production after successful
>> testing, something like this always seems to pop up at the last minute
>> rending oVirt questionable in terms of reliability for some unknown issue.
>>
>>
>>
>>
>>
>>
>>
>> *From: *Bill Bill <jax2...@outlook.com>
>> *Sent: *Friday, January 20, 2017 7:54 PM
>> *To: *users <users@ovirt.org>
>> *Subject: *RE: master storage domain stuck in locked state
>>
>>
>>
>>
>>
>> So apparently something didn’t change the metadata to master before
>> connection was lost. I changed the metadata role to master and it came
>> back up. It seems emailing in helped, because every time I can’t figure
>> something out, I email in and find it shortly after.
>>
>>
>>
>>
>>
>> *From: *Bill Bill <jax2...@outlook.com>
>> *Sent: *Friday, January 20, 2017 7:43 PM
>> *To: *users <users@ovirt.org>
>> *Subject: *master storage domain stuck in locked state
>>
>>
>>
>> No clue how to get this out. I can mount all storage manually on the
>> hypervisors. It seems like after a reboot oVirt is now having some issue
>> and the storage domain is stuck in locked state. Because of this, can’t
>> activate any other storage either, so the other domains are in maintenance
>> and the master sits in locked state, has been for hours.
>>
>>
>>
>> This sticks out on a hypervisor:
>>
>>
>>
>> StoragePoolWrongMaster: Wrong Master domain or its version:
>> u'SD=d8a0172e-837f-4552-92c7-566dc4e548e4, pool=3fd2ad92-e1eb-49c2-906d-0
>> 0ec233f610a'
>>
>>
>>
>> Not sure, nothing changed other than a reboot of the storage.
>>
>>
>>
>> Engine log shows:
>>
>>
>>
>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
>> (DefaultQuartzScheduler8) [5696732b] START, SetVdsStatusVDSCommand(HostName
>> = U31U32NodeA, SetVdsStatusVDSCommandParameters:{runAsync='true',
>> hostId='70e2b8e4-0752-47a8-884c-837a00013e79', status='NonOperational',
>> nonOperationalReason='STORAGE_DOMAIN_UNREACHABLE',
>> stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 6db9820a
>>
>>
>>
>> No idea why it says unreachable, it certainly is because I can manually
>> mount ALL storage to the hypervisor.
>>
>>
>>
>> Sent from Mail <https://go.microsoft.com/fwlink/?LinkId=550986> for
>> Windows 10
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] master storage domain stuck in locked state

2017-01-22 Thread Maor Lipchuk
Hi Bill,

Can you please attach the engine and VDSM logs.
Does the storage domain still stuck?

Regards,
Maor

On Sat, Jan 21, 2017 at 3:11 AM, Bill Bill  wrote:

>
>
> Also cannot reinitialize the datacenter because the storage domain is
> locked.
>
>
>
> *From: *Bill Bill 
> *Sent: *Friday, January 20, 2017 8:08 PM
> *To: *users 
> *Subject: *RE: master storage domain stuck in locked state
>
>
>
> Spoke too soon. Some hosts came back up but the storage domain is still
> locked so no vm’s can be started. What is the proper way to force this to
> be unlocked? Each time we look to move into production after successful
> testing, something like this always seems to pop up at the last minute
> rending oVirt questionable in terms of reliability for some unknown issue.
>
>
>
>
>
>
>
> *From: *Bill Bill 
> *Sent: *Friday, January 20, 2017 7:54 PM
> *To: *users 
> *Subject: *RE: master storage domain stuck in locked state
>
>
>
>
>
> So apparently something didn’t change the metadata to master before
> connection was lost. I changed the metadata role to master and it came
> back up. It seems emailing in helped, because every time I can’t figure
> something out, I email in and find it shortly after.
>
>
>
>
>
> *From: *Bill Bill 
> *Sent: *Friday, January 20, 2017 7:43 PM
> *To: *users 
> *Subject: *master storage domain stuck in locked state
>
>
>
> No clue how to get this out. I can mount all storage manually on the
> hypervisors. It seems like after a reboot oVirt is now having some issue
> and the storage domain is stuck in locked state. Because of this, can’t
> activate any other storage either, so the other domains are in maintenance
> and the master sits in locked state, has been for hours.
>
>
>
> This sticks out on a hypervisor:
>
>
>
> StoragePoolWrongMaster: Wrong Master domain or its version:
> u'SD=d8a0172e-837f-4552-92c7-566dc4e548e4, pool=3fd2ad92-e1eb-49c2-906d-
> 00ec233f610a'
>
>
>
> Not sure, nothing changed other than a reboot of the storage.
>
>
>
> Engine log shows:
>
>
>
> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
> (DefaultQuartzScheduler8) [5696732b] START, SetVdsStatusVDSCommand(HostName
> = U31U32NodeA, SetVdsStatusVDSCommandParameters:{runAsync='true',
> hostId='70e2b8e4-0752-47a8-884c-837a00013e79', status='NonOperational',
> nonOperationalReason='STORAGE_DOMAIN_UNREACHABLE',
> stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 6db9820a
>
>
>
> No idea why it says unreachable, it certainly is because I can manually
> mount ALL storage to the hypervisor.
>
>
>
> Sent from Mail  for
> Windows 10
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] domain types for export and iso: no fcp and no iscsi

2017-01-22 Thread Maor Lipchuk
On Sun, Jan 22, 2017 at 2:28 AM, Gianluca Cecchi 
wrote:

> Hello,
> I didn't notice before, but it seems that in 4.0.6 when I select "export"
> or "ISO" in Domain Function, then for Storage Type I can only select NFS,
> GlusterFS or POSIX compliant FS.
> If this is true, why this limitation? Any chance to change it, allowing
> FCP and iSCSI also for these kinds of domains?
>

Hi Gianluca,

That is indeed the behavior.
ISO and export are only supported through mount options.

I think that the goal is to replace those types of storage domains.
For example, the backup use case that the export storage domain is used for
can be covered by a regular data storage domain since oVirt 3.6.

Yaniv, do we have any open RFEs on our future plans?


>
> Thanks,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] Lowering the bar for wiki contribution?

2017-01-04 Thread Maor Lipchuk
On Wed, Jan 4, 2017 at 11:38 AM, Daniel Erez  wrote:

>
>
> On Wed, Jan 4, 2017 at 9:57 AM, Roy Golan  wrote:
>
>> I'm getting the feeling I'm not alone in this; authoring and publishing a
>> wiki page hasn't been what it used to be for a long time.
>>
>> I want to suggest a bit lighter workflow:
>>
>> 1.  Everyone can merge their page - (it's a wiki)
>>   Same as with (public and open) code, no one has the motivation to
>> publish a badly written
>>   wiki page under their name. True, it can have an impact, but not as
>> with broken code
>>
>>
> +1.
> Moreover, I think we shouldn't block any merging. Instead, wiki
> maintainers could act afterwards and revert when needed (Wikipedia style).
> Another issue is that (sadly) unlike mediawiki, we need to wait for wiki
> publish after a change. So I'd suggest to build and publish the wiki at
> least once a day. Any way, I think we should make the workflow much more
> intuitive and pleasant like the previous wiki - i.e. much less restrictive
> than manipulating a code base.
>

>
>> 2. Use Page-Status marker
>>  The author first merges the draft. Its now out there and should be
>> updated as time goes and its
>>  status is DRAFT. Maintainers will come later and after review would
>> change the status to
>>  PUBLISH. That could be a header in on the page:
>>  ---
>>  page status: DRAFT/PUBLISH
>>  ---
>>
>>  Simple I think, and should work.
>>
>>
 +1
The effort of maintaining the wiki today, compared to how it used to be
before, is much more cumbersome and problematic.
I think we can learn a lot from Wikipedia's workflow;
it is a much more inviting process where anyone can change the content
easily.
I'm not saying we should let any anonymous user change the wiki, but even if
we only make it easier in-house we can achieve a much more informative,
reliable and up-to-date wiki.


>
>>
>>
>> ___
>> Devel mailing list
>> de...@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] anybody gave 4.1 beta a try?

2016-12-12 Thread Maor Lipchuk
On Thu, Dec 8, 2016 at 5:33 PM, Yaniv Kaul  wrote:

>
>
> On Wed, Dec 7, 2016 at 3:10 PM, Gianluca Cecchi  > wrote:
>
>> On Tue, Dec 6, 2016 at 9:38 AM, Sandro Bonazzola 
>> wrote:
>>
>>> Hi,
>>> any feedback on 4.1 beta we released last week?
>>> Thanks,
>>>
>>>
>>>
>> I see that in storage tab the NFS domain is marked as V4, while in 4.0.5
>> is marked as V3.
>> The nfs mount from host is still v3, but I think it is not related and
>> instead V4 refers to functionalities of storage domain itself...
>>
>
> Right.
>
>
>> In this case, where to find V3 vs V4 storage domain features?
>>
>
> http://www.ovirt.org/develop/release-management/features/
> storage/DataCenterV4_1/ - but it may need some updates.
> Y.
>

There is a more detailed feature page for qcow2v3 which is currently under
review, here it is:

https://github.com/maorlipchuk/ovirt-site/blob/cdbbfa5250af0e207ff151af67f188b3451d4c33/source/develop/release-management/features/storage/qcow2v3.html.md


>
>
>>
>>
>> Gianluca
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.phx.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cloning a VM with Ceph/Cinder based disk leaves disk in locked state

2016-11-22 Thread Maor Lipchuk
Hi Thomas,

That does look like a bug. Can you please open a new bug in Bugzilla at
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
and also attach the engine logs to it?

Thanks,
Maor


On Mon, Nov 21, 2016 at 12:00 PM, Thomas Klute  wrote:

> Dear oVirt Users,
>
> we're using cinder (based on the Kolla setup) to provide storage for
> ovirt. Everything works fine except the clone process of a VM.
> Cloning a VM with NFS based storage works as expected, thus I think it's
> the cinder integration that causes the problem here.
>
> When cloning a VM with cinder/ceph-based storage we see, that the VM
> clone is created, the attached image is cloned as well, but the
> disk/image remains in locked state. We then need to issue a
>
> "update images set imagestatus=1 where imagestatus=2;"
>
> on the engine to make the VM clone work.
>
> Is this a bug in the cinder integration?
>
> Thanks and best regards,
>  Thomas
>
> engine.log:
> 2016-11-21 10:00:20,216 INFO  [org.ovirt.engine.core.bll.CloneVmCommand]
> (default task-19) [2dd83801] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[vm-vertr-kp-klon= ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
> sharedLocks='[897d96c8-0ea9-4d06-b815-66a42b63c49b= ACTION_TYPE_FAILED_VM_IS_BEING_CLONED>,
> e5e06033-e099-4efe-a5cd-9de2ebc0238b= ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName vm-vertr-kp-klon>]'}'
> 2016-11-21 10:00:21,290 INFO  [org.ovirt.engine.core.bll.CloneVmCommand]
> (default task-19) [] Running command: CloneVmCommand internal: false.
> Entities affected :  ID: 897d96c8-0ea9-4d06-b815-66a42b63c49b Type:
> VMAction group CREATE_VM with role type USER
> 2016-11-21 10:00:21,846 INFO
> [org.ovirt.engine.core.bll.storage.disk.cinder.
> CloneSingleCinderDiskCommand]
> (default task-19) [4c8e58b8] Running command:
> CloneSingleCinderDiskCommand internal: true. Entities affected :  ID:
> 1f342ea3-49f8-4f65-bf15-ce48514e9bd3 Type: StorageAction group
> CONFIGURE_VM_STORAGE with role type USER
> 2016-11-21 10:00:23,143 INFO
> [org.ovirt.engine.core.bll.AddGraphicsDeviceCommand] (default task-19)
> [420ca5bf] Running command: AddGraphicsDeviceCommand internal: true.
> Entities affected :  ID: b9f78fd2-9a55-42f0-9ae9-7bca4ae93d9a Type:
> VMAction group EDIT_VM_PROPERTIES with role type USER
> 2016-11-21 10:00:23,151 INFO
> [org.ovirt.engine.core.bll.AddGraphicsDeviceCommand] (default task-19)
> [7f72e8e4] Running command: AddGraphicsDeviceCommand internal: true.
> Entities affected :  ID: b9f78fd2-9a55-42f0-9ae9-7bca4ae93d9a Type:
> VMAction group EDIT_VM_PROPERTIES with role type USER
> 2016-11-21 10:00:23,222 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-19) [7f72e8e4] Correlation ID: 2dd83801, Job ID:
> a02e5069-a9ef-4b1b-8ec1-b1922d2e3135, Call Stack: null, Custom Event ID:
> -1, Message: VM vm-vertr-kp-klon was created by admin@internal-authz.
> 2016-11-21 10:00:23,240 INFO  [org.ovirt.engine.core.bll.CloneVmCommand]
> (default task-19) [7f72e8e4] Lock freed to object
> 'EngineLock:{exclusiveLocks='[vm-vertr-kp-klon= ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
> sharedLocks='[897d96c8-0ea9-4d06-b815-66a42b63c49b= ACTION_TYPE_FAILED_VM_IS_BEING_CLONED>,
> e5e06033-e099-4efe-a5cd-9de2ebc0238b= ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName vm-vertr-kp-klon>]'}'
> 2016-11-21 10:00:29,283 INFO
> [org.ovirt.engine.core.bll.storage.disk.cinder.
> CloneSingleCinderDiskCommandCallback]
> (DefaultQuartzScheduler9) [4c8e58b8] Command 'CloneSingleCinderDisk' id:
> 'a76696d7-f698-4591-9a26-888f47462888' child commands '[]' executions
> were completed, status 'SUCCEEDED'
> 2016-11-21 10:00:29,284 INFO
> [org.ovirt.engine.core.bll.storage.disk.cinder.
> CloneSingleCinderDiskCommandCallback]
> (DefaultQuartzScheduler9) [4c8e58b8] Command 'CloneSingleCinderDisk' id:
> 'a76696d7-f698-4591-9a26-888f47462888' Updating status to 'SUCCEEDED',
> The command end method logic will be executed by one of its parent
> commands.
>
> Packets:
> ovirt-engine-jboss-as-7.1.1-1.el7.centos.x86_64
> ovirt-vmconsole-proxy-1.0.4-1.el7.centos.noarch
> ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
> ovirt-engine-setup-base-4.0.5.5-1.el7.centos.noarch
> ovirt-guest-agent-common-1.0.12-3.el7.noarch
> ovirt-engine-setup-plugin-ovirt-engine-4.0.5.5-1.el7.centos.noarch
> ovirt-host-deploy-1.5.3-1.el7.centos.noarch
> ovirt-engine-websocket-proxy-4.0.5.5-1.el7.centos.noarch
> ovirt-engine-extensions-api-impl-4.0.5.5-1.el7.centos.noarch
> ovirt-engine-wildfly-10.1.0-1.el7.x86_64
> ovirt-engine-dbscripts-4.0.5.5-1.el7.centos.noarch
> ovirt-engine-restapi-4.0.5.5-1.el7.centos.noarch
> ovirt-vmconsole-1.0.4-1.el7.centos.noarch
> ovirt-release36-3.6.6-1.noarch
> ovirt-engine-lib-4.0.5.5-1.el7.centos.noarch
> ovirt-setup-lib-1.0.2-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-4.0.5.5-1.el7.centos.noarch
> ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.0.5.
> 

Re: [ovirt-users] Live merge speed

2016-11-06 Thread Maor Lipchuk
Hi Markus,

Can you try to use dd on a random file in that storage domain? What
performance does it show?
I just want to make sure where the problem originates: whether it is related
to the storage domain or to the qemu process.
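For example (a hedged sketch only; replace the mount path with your actual NFS
domain mount under /rhev/data-center/mnt/, and remove the test file afterwards):

  dd if=/dev/zero of=/rhev/data-center/mnt/<server:_export>/ddtest.img \
     bs=1M count=1024 oflag=direct

That should give a rough idea of the raw write throughput the domain can
sustain, independent of qemu.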

Regards,
Maor

On Thu, Nov 3, 2016 at 8:09 PM, Markus Stockhausen 
wrote:

> Hi there,
>
> in the past we already observed that live merge operations may take some
> time.
> Currently another of those operations is running on one of our VMs on a
> CentOS 7/qemu 2.3 node. Storage is (async - yeah I know about potential
> data
> loss) NFS over 10GBit network.
>
> The operation is running for 50 minutes now with an average throughput of
> 5MByte/sec. The disk (defined at 400GB) and its snapshots does not show
> much
> thin provisioning:
>
> ls -al
> -rw-rw.   1 vdsm kvm 429496729600  7. Apr 2016
> a204865e-ef57-41e0-a2ac-465a6c9c3d60
> -rw-rw.   1 vdsm kvm 203334418432  3. Nov 2016
> aeda833f-1f4b-4b6f-b553-578f1f3f06ee
> -rw-rw.   1 vdsm kvm 192091389952  3. Nov 2016
> c8acdbc7-af24-4c5c-94c5-ae7262d98f5c
>
> du -m a204865e-ef57-41e0-a2ac-465a6c9c3d60
> 383070  a204865e-ef57-41e0-a2ac-465a6c9c3d60
> du -m aeda833f-1f4b-4b6f-b553-578f1f3f06ee
> 193953  aeda833f-1f4b-4b6f-b553-578f1f3f06ee
> du -m c8acdbc7-af24-4c5c-94c5-ae7262d98f5c
> 183222  c8acdbc7-af24-4c5c-94c5-ae7262d98f5c
>
> Usually at some point the process is gaining speed and we see >100MByte/sec
> speed. Can anyone explain what might be going on.
>
> Best regards.
>
> Markus Stockhausen
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multiple Data Storage Domains

2016-11-06 Thread Maor Lipchuk
Hi Gary,

Do you have other disks on this storage domain?
Have you tried to use other VMs with disks on this storage domain?
Is this disk preallocated? If not, can you try to create a preallocated
disk and retry?

Regards,
Maor



On Sat, Nov 5, 2016 at 2:28 AM, Gary Pedretty  wrote:

> I am having an issue in a Hosted Engine GlusterFS setup.   I have 4 hosts
> in a cluster, with the Engined being hosted on the Cluster.  This follows
> the pattern shown in the docs for a glusterized setup, except that I have 4
> hosts.   I have engine, data, iso and export storage domains all as
> glusterfs on a replica 3 glusterfs on the first 3 hosts.  These gluster
> volumes are running on an SSD Hardware Raid 6, which is identical on all
> the hosts.  All the hosts have a second Raid 6 Array with Physical Hard
> Drives and I have created a second data storage domain as a glusterfs
> across all 4 hosts as a stripe 2 replica 2 and have added it to the Data
> Center.  However if I use this second Storage Domain as the boot disk for a
> VM, or as second disk for a VM that is already running, the VM will become
> non-responsive as soon as the VM starts using this disk.   Happens during
> the OS install if the VM is using this storage domain for its boot disk, or
> if I try copying anything large to it when it is a second disk for a VM
> that has its boot drive on the Master Data Storage Domain.
>
> If I mount the gluster volume that is this second storage domain on one of
> the hosts directly or any other machine on my local network, the gluster
> volume works fine.  It is only when it is used as a storage domain (second
> data domain) on VMs in the cluster.
>
> Once the vm becomes non-responsive it cannot be stopped, removed or
> destroyed without restarting the host machine that the VM is currently
> running on.   The 4 hosts are connected via 10gig ethernet, so should not
> be a network issue.
>
>
> Any ideas?
>
> Gary
>
> 
> Gary Pedrettyg...@ravnalaska.net
> 
> Systems Manager  www.flyravn.com
> Ravn Alaska   /\907-450-7251
> 5245 Airport Industrial Road /  \/\ 907-450-7238 fax
> Fairbanks, Alaska  99709/\  /\ \ Second greatest commandment
> Serving All of Alaska  /  \/  /\  \ \/\   “Love your neighbor as
> Really loving the record green up date! Summmer!!   yourself” Matt 22:39
> 
>
>
>
>
>
>
>
>
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6 to 4.0 Functionality Elevation

2016-11-06 Thread Maor Lipchuk
Hi Clint,
See my comments inline

Regards,
Maor

On Fri, Nov 4, 2016 at 1:51 PM, Clint Boggio 
wrote:

> Greetings OVirt Community
>
> I have a production system that started life as a OV 3.6 system, and I've
> recently upgraded the engine to 4.0. I see in the cluster config, a
> configuration element that indicates the cluster functionality level. It
> has a drop down that allows me to toggle between 3.6, and 4.0.
>
> If I select 4.0 and save, what potential problems could I expect ?
>
> Setup;
>
> 1 Dedicated engine CentOS7 OV version 4
>
> 6 hosts running CentOS7 and OV version 4.0 and 3.6 repos.
>

If your hosts are already EL7, you can take a look at the upgrade manager
https://www.ovirt.org/develop/release-management/features/engine/
upgrademanager/
see also the following demo video:
https://www.youtube.com/watch?v=fDzBNKu5pyQ=youtu.be

Otherwise you'll have to reinstall hosts with EL7 one by one.

After all the hosts have been upgraded, you simply need to change all your
clusters' compatibility level to 4.0 and finally the Data Center's
compatibility level to 4.0.


>
> Main data domain is iscsi over IB.
>
> ISO and all other data domains are  NFS over IB
>
>
Regarding your Storage Domains (ISO, NFS, iSCSI), there should not be any
additional changes as far as I know, since all of them will use the same
version (V3) for engine 3.6 and 4.0.


>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem moving master storage domain to maintenance

2016-11-03 Thread Maor Lipchuk
Hi kasturi,

Which version of oVirt are you using?
Roy, I assume it is related to 4.0 version where the import of hosted
storage domain was introduced. Care to share your insight about it?

Regards,
Maor


On Thu, Nov 3, 2016 at 12:23 PM, knarra  wrote:

> Hi,
>
> I have three storage domains backed by gluster in my environment
> (hostedstorage, data and vmstore). I want to move the storage domains
> into maintenance. I have a couple of questions here.
>
> 1) Will moving master storage domain into maintenance have some impact on
> hostedstorage?
>
> 2) I see that moving master storage domain into maintenance causes
> HostedEngine VM to restart and moves hosted_storage from active to Unknown
> state. Is this expected?
>
> 3) master storage domain remains in  "Preparing for Maintenance" and i see
> the following exceptions in the engine.log.
>
> 2016-11-03 06:22:10,988 ERROR 
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> (DefaultQuartzScheduler6) [2d534f09] IrsBroker::Failed::GetStoragePoolInfoVDS:
> IRSGenericException: IRSErrorException: IRSNoMasterDomainException: Wrong
> Master domain or its version: u'SD=08aba92e-e685-45d7-b03f-85d9678ecc9b,
> pool=581999ef-02aa-0272-0334-0159'
> 2016-11-03 06:22:11,001 WARN [org.ovirt.engine.core.bll.sto
> rage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-6-thread-24)
> [210d2f12] Validation of action 'ReconstructMasterDomain' failed for user
> SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTE
> R,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status
> PreparingForMaintenance
>
> Thanks
>
> kasturi.
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Engine Upgrade 3.6 to 4.0.4 - Next Steps?

2016-11-03 Thread Maor Lipchuk
Hi Daniel,

The upgrade process should be performed by first upgrading all your clusters
to 4.x; after all your clusters have been upgraded, the Data Center should
also be upgraded to the desired version.

The hosts might also need to be upgraded by yum update.
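For a plain EL7 host that is roughly the following (a hedged sketch; it assumes
the host is first put into Maintenance from the UI, and that the 4.0 release
package URL below is still the current one):

  yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
  yum update
  # then reboot if needed and activate the host from the UI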
Sandro, correct me if I'm wrong, is there a wiki regarding the Host upgrade
process?

On Thu, Nov 3, 2016 at 12:13 AM, Beckman, Daniel <
daniel.beck...@ingramcontent.com> wrote:

> So I’ve successfully upgraded my oVirt engine to 4.0.4, but my hosts are
> still running oVirt node 3.6 and the cluster and data center is still in
> 3.6 compatibilty mode. All of the oVirt documentation I’ve found is
> referencing 3.x. Can someone point me to updated documentation – does it
> exist? I found the equivalent documentation for the commercial product,
> RHV, but it (https://access.redhat.com/documentation/en/red-hat-
> virtualization/4.0/paged/upgrade-guide/32-upgrading-to-
> red-hat-virtualization-manager-40) doesn’t really address upgrading hosts
> from 3.6 to 4.x. Do 3.6 hosts have to be removed, wiped, and rebuilt from
> scratch? Or can they be upgraded to 4.x from the manager, or by booting
> from the 4.x ISO?
>
>
>
> Thanks,
>
> Daniel
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] can not use iscsi storage type on ovirt and Glusterfs hyper-converged environment

2016-11-02 Thread Maor Lipchuk
Thanks for the logs,

What kind of VDSM version are you using?
"rpm -q vdsm"
There seems to be a similar issue which was reported recently in the VDSM
area
(see https://bugzilla.redhat.com/show_bug.cgi?id=1197292)
It should be fixed in later versions of VDSM (vdsm-4.16.12-2.el7ev.x86_64 and newer).
Adding also Nir and Jeff to the thread, if they have any insights

Regards,
Maor

On Wed, Nov 2, 2016 at 4:11 AM, 胡茂荣 <maorong...@horebdata.cn> wrote:

>
>  Hi Maor:
>   vdsm/supervdsm/engine log on attachment .  I mkfs.xfs the lun block
> device and mount to /mnt , dd write  ,dmesg not report error ,dd result is
> ok :
>
> /dev/sdi  50G   33M   50G   1% /mnt
>
> [root@horebc mnt]# for i in `seq 3`; do dd if=/dev/zero of=./file   bs=1G
> count=1 oflag=direct ; done
> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB) copied, 13.3232 s, 80.6 MB/s
> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB) copied, 9.89988 s, 108 MB/s
> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB) copied, 14.0143 s, 76.6 MB/s
>
>my envirnment  have three  network segments (hosts have 3 network
> segments ) :
>engine  and glusterfs mount : 192.168.11.X/24
> glusterfs brick : 192.168.10.x/24
> iscsi : 192.168.1.0/24
>
> and I add 192.168.1.0/24 to engine vm ,  ovirt web UI report the same
> error .
>
>  humaorong
>   2016-11-2
>
> -- Original --
> *From: * "Maor Lipchuk"<mlipc...@redhat.com>;
> *Date: * Tue, Nov 1, 2016 08:14 PM
> *To: * "胡茂荣"<maorong...@horebdata.cn>;
> *Cc: * "users"<users@ovirt.org>;
> *Subject: * Re: [ovirt-users] can not use iscsi storage type on ovirt
> andGlusterfs hyper-converged environment
>
Hi 胡茂荣,

Can you please also add the VDSM and engine logs?
If you try to discover and connect to those LUNs directly from your host,
does it work?
>
> Regards,
> Maor
>
>
> On Tue, Nov 1, 2016 at 6:12 AM, 胡茂荣 <maorong...@horebdata.cn> wrote:
>
>>
>>
>> on ovirt and Glusterfs hyper-converged environment , can not use
>> iscsi storage type , UI report error: "Could not retrieve LUNs, please
>> check your storage." , vdsm report :"VDSM hosted_engine_3 command
>> failed: Error block device action: ()" .
>> but this block device alse login on centos 7 host :
>> =
>>
>> ## lsscsi
>>
>> [7:0:0:0]diskSCST_BIO DEVFOR_OVIRT_rbd  221  /dev/sdi
>>
>>   ## dmesg :
>>
>> [684521.131186] sd 7:0:0:0: [sdi] Attached SCSI disk
>>
>> ===---
>>
>>###vdsm or supervdsm log  report :
>>
>> MainProcess|jsonrpc.Executor/7::ERROR::2016-11-01
>> 11:07:00,178::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper)
>> Error in getPathsStatus
>>
>> MainProcess|jsonrpc.Executor/4::ERROR::2016-11-01
>> 11:07:20,964::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper)
>> Error in getPathsStatus
>>
>>jsonrpc.Executor/4::DEBUG::2016-11-01 
>> 11:07:04,251::iscsi::434::Storage.ISCSI::(rescan)
>> Performing SCSI scan, this will take up to 30 seconds
>>
>> jsonrpc.Executor/5::INFO::2016-11-01 11:07:19,413::iscsi::567::Stor
>> age.ISCSI::(setRpFilterIfNeeded) iSCSI iface.net_ifacename not provided.
>> Skipping.
>>
>> 11:09:15,753::iscsiadm::119::Storage.Misc.excCmd::(_runCmd)
>> /usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/sbin/iscsiadm -m
>> session -R (cwd None)
>>
>> ==
>>
>>  the other info please the attachment "bug-info.doc".
>>
>>  this prolem on ovirt3.6 and 4.X  ovirt and Glusterfs
>> hyper-converged environment . how can I use iscsi storage type on ovirt
>> and Glusterfs hyper-converged environment .Please help me !
>>
>> humaorong
>>
>>2016-11-1
>>
>>
>>
>>
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] can not use iscsi storage type on ovirt and Glusterfs hyper-converged environment

2016-11-01 Thread Maor Lipchuk
Hi 胡茂荣,

Can you please also add the VDSM and engine logs?
If you try to discover and connect to those LUNs directly from your host,
does it work?
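
For reference, this is roughly what I mean by testing directly from the
host; the portal address and target IQN below are placeholders for your own
values:

  # Discover the targets exposed by the portal
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260

  # Log in to one of the discovered targets
  iscsiadm -m node -T iqn.2016-11.example:target0 -p 192.168.1.10:3260 --login

  # The LUN should now show up as a block device
  lsscsi

  # Log out again when done
  iscsiadm -m node -T iqn.2016-11.example:target0 -p 192.168.1.10:3260 --logout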

Regards,
Maor


On Tue, Nov 1, 2016 at 6:12 AM, 胡茂荣  wrote:

>
>
> on ovirt and Glusterfs hyper-converged environment , can not use iscsi
> storage type , UI report error: "Could not retrieve LUNs, please check
> your storage." , vdsm report :"VDSM hosted_engine_3 command failed: Error
> block device action: ()" .
> but this block device alse login on centos 7 host :
> =
>
> ## lsscsi
>
> [7:0:0:0]diskSCST_BIO DEVFOR_OVIRT_rbd  221  /dev/sdi
>
>   ## dmesg :
>
> [684521.131186] sd 7:0:0:0: [sdi] Attached SCSI disk
>
> ===---
>
>###vdsm or supervdsm log  report :
>
> MainProcess|jsonrpc.Executor/7::ERROR::2016-11-01
> 11:07:00,178::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper)
> Error in getPathsStatus
>
> MainProcess|jsonrpc.Executor/4::ERROR::2016-11-01
> 11:07:20,964::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper)
> Error in getPathsStatus
>
>jsonrpc.Executor/4::DEBUG::2016-11-01 
> 11:07:04,251::iscsi::434::Storage.ISCSI::(rescan)
> Performing SCSI scan, this will take up to 30 seconds
>
> jsonrpc.Executor/5::INFO::2016-11-01 11:07:19,413::iscsi::567::
> Storage.ISCSI::(setRpFilterIfNeeded) iSCSI iface.net_ifacename not
> provided. Skipping.
>
> 11:09:15,753::iscsiadm::119::Storage.Misc.excCmd::(_runCmd)
> /usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/sbin/iscsiadm -m
> session -R (cwd None)
>
> ==
>
>  the other info please the attachment "bug-info.doc".
>
>  this prolem on ovirt3.6 and 4.X  ovirt and Glusterfs hyper-converged
> environment . how can I use iscsi storage type on ovirt and Glusterfs
> hyper-converged environment .Please help me !
>
> humaorong
>
>2016-11-1
>
>
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to import a qcow2 disk into ovirt

2016-08-29 Thread Maor Lipchuk
Hi lifuqiong,

There are several ways to import disks into oVirt

Does the disk contain any snapshots?
if not, the disk file can be copied to the storage domain and you can
register it using the Register button (see
https://bugzilla.redhat.com/show_bug.cgi?id=1138139)

You can also take a look at the image-uploader, see
http://www.ovirt.org/develop/release-management/features/storage/image-upload/

What is the use case you are trying to achieve? What is the origin of the
disk (was it an oVirt disk?) and, as asked before, does the disk include any
snapshots?
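
If you are not sure whether the disk carries snapshots, a quick way to check
on any host with qemu-img installed is (the path below is just an example):

  # Internal snapshots appear in the "Snapshot list" section of the output;
  # an external snapshot chain shows up as a "backing file" entry
  qemu-img info /path/to/disk.qcow2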

Regards,
Maor


On Mon, Aug 29, 2016 at 3:40 PM, lifuqiong  wrote:

> Hi,
>
>  How to import a qcow2 disk file into ovirt? I search the Internet
> for a long time , but find no solution work.
>
>
>
> Thank you
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Attaching a physical DVD drive

2016-08-23 Thread Maor Lipchuk
On Tue, Aug 23, 2016 at 4:32 PM, Niksa Baldun <niksa.bal...@gmail.com>
wrote:

> Hi Maor,
>
> thanks for the reply. So, to be clear, it is not possible to install from
> physical DVD drive?
>

Indeed


> I have to copy it to hard drive and include it in storage domain?
>

Using the ISO Storage Domain is one way to solve this.
You might also use Glance or the image uploader, but you still won't get away
from the copy operation.
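
As a rough sketch of that copy operation (the device name, file name and ISO
domain name are placeholders, and the engine-iso-uploader syntax is from
memory, so please check its --help on the engine machine):

  # On a machine with the physical drive, turn the DVD into an ISO image
  dd if=/dev/sr0 of=/var/tmp/my-install-media.iso bs=2M

  # Then, on the engine machine, push the image into the ISO storage domain
  engine-iso-uploader upload -i MyISODomain /var/tmp/my-install-media.iso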



> Regards,
>
> Nikša
>
> On 23 August 2016 at 15:24, Maor Lipchuk <mlipc...@redhat.com> wrote:
>
>> Hi Niksa,
>>
>> If you can make the DVD to be used as an ISO file and use it as part of
>> an ISO Storage Domain and boot your VM from it
>>
>> Edit the Virtual Machine -> go to "boot options" -> mark "Attach CD" and
>> pick the ISO you want to use when booting the VM.
>>
>> See http://www.ovirt.org/documentation/quickstart/quickstart
>> -guide/#Attach_an_ISO_domain for more details
>>
>> Regards,
>> Maor
>>
>> On Tue, Aug 23, 2016 at 4:09 PM, Niksa Baldun <niksa.bal...@gmail.com>
>> wrote:
>>
>>> Hello,
>>>
>>> sorry if this is a stupid question, but how do I install OS in a VM from
>>> a physical DVD inserted in drive on host? Google tells me nothing.
>>>
>>> Thanks.
>>>
>>>
>>> Nikša
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Attaching a physical DVD drive

2016-08-23 Thread Maor Lipchuk
Hi Niksa,

You can turn the DVD into an ISO file, add it to an ISO Storage Domain, and
boot your VM from it.

Edit the Virtual Machine -> go to "boot options" -> mark "Attach CD" and
pick the ISO you want to use when booting the VM.

See
http://www.ovirt.org/documentation/quickstart/quickstart-guide/#Attach_an_ISO_domain
for more details

Regards,
Maor

On Tue, Aug 23, 2016 at 4:09 PM, Niksa Baldun 
wrote:

> Hello,
>
> sorry if this is a stupid question, but how do I install OS in a VM from a
> physical DVD inserted in drive on host? Google tells me nothing.
>
> Thanks.
>
>
> Nikša
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt becomes unusable if master storage domain lost

2016-08-23 Thread Maor Lipchuk
Hi Bill,

Have you tried to re-initialize the Data Center with a new Storage Domain?
Add a new Storage Domain to the engine and keep it detached from any Data
Center.
Right click on the Data Center and choose "Re-Initialize Data Center"

Regards,
Maor

On Tue, Aug 23, 2016 at 5:00 AM, Bill Bill  wrote:

>
>
> Hello,
>
>
>
> In testing fault tolerance/recovery procedures if the master storage
> domain is lost/removed, ovirt becomes pretty much unusable. You cannot add
> new storage as oVirt will not mount it, even if you can mount it manually
> to the host. If you force remove the old failed storage domain via the
> engine database, you’re essentially screwed since it will fail to mount ANY
> new storage.
>
>
>
> Is there some workaround for this so that if the original master storage
> domain becomes damaged, fails, goes offline etc that oVirt is not so
> dependent on this OR another storage can continue to be added?
>
>
>
> Right now, I have an oVirt datacenter with multiple networks, redundant
> nic’s etc that that is going to have to be completely removed, reconfigured
> and then set up from scratch again essentially due to the master storage
> domain issue.
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move from Local SD to Shared

2016-07-17 Thread Maor Lipchuk
Hi Alexandr,

Does the storage domain's server support NFS or POSIX?
If so, you can create a new shared DC, destroy the old local DC (without
formatting the local SD), and then try to import this SD as a shared storage
domain.

Regards,
Maor


On Fri, Jul 15, 2016 at 5:10 PM, Alexandr Krivulya 
wrote:

> Hi,
>
> I need to move my datacenter from local storage domain to shared (nfs or
> posix) without destroying storage. What is the best way to do it in
> oVirt 3.6?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt and Ceph

2016-06-25 Thread Maor Lipchuk
Hi Charles,

Currently, oVirt communicates with Ceph only through Cinder.
If you want to avoid using Cinder, perhaps you can try to use cephfs and
mount it as a POSIX storage domain instead.
Regarding the Cinder appliance, it is not yet implemented, though we are
currently investigating this option.
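
To illustrate the cephfs route (the monitor address, credentials and paths
are placeholders, and this assumes the kernel cephfs client is available on
the hosts), you could first test the mount manually and then reuse the same
values in the Path / VFS Type / Mount Options fields of a POSIX compliant FS
storage domain:

  # Manual test mount from a host, before defining the storage domain
  mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs-test \
        -o name=admin,secretfile=/etc/ceph/admin.secret

  # In the "New Domain" dialog (Storage Type: POSIX compliant FS) use roughly:
  #   Path:          10.0.0.1:6789:/
  #   VFS Type:      ceph
  #   Mount Options: name=admin,secretfile=/etc/ceph/admin.secret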

Regards,
Maor

On Fri, Jun 24, 2016 at 11:23 PM, Charles Gomes 
wrote:

> Hello
>
>
>
> I’ve been reading lots of material about implementing oVirt with Ceph,
> however all talk about using Cinder.
>
> Is there a way to get oVirt with Ceph without having to implement entire
> Openstack ?
>
> I’m already currently using Foreman to deploy Ceph and KVM nodes, trying
> to minimize the amount of moving parts. I heard something about oVirt
> providing a managed Cinder appliance, have any seen this ?
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host installation failed while creating a new host

2016-06-19 Thread Maor Lipchuk
Can you please also add the VDSM and engine logs?

Adding also Martin to the thread,
Martin ,could it be a dup of https://bugzilla.redhat.com/1320128 ?

Regards,
Maor



On Sat, Jun 18, 2016 at 4:08 AM, Dewey Du  wrote:

> I got the error "Host  installation failed. Failed to configure
> management network on the host.", when creating a new host.
>
> Attached the supervdsm.log on the host.
>
> restore-net::INFO::2016-06-18
> 08:26:22,018::netconfpersistence::62::root::(setNetwork) Adding network
> ovirtmgmt({u'hostQos': {u'out': {u'ls': {u'm2': 50}}}, 'nic': u'em1',
> u'ipaddr': u'10.0.100.17', u'switch': u'legacy', u'mtu': 1500, u'netmask':
> u'255.255.255.0', u'STP': u'no', u'bridged': u'true', u'gateway':
> u'10.0.100.2', u'defaultRoute': True})
> restore-net::DEBUG::2016-06-18
> 08:26:22,028::netinfo::735::root::(_get_gateway) The gateway 10.0.100.2 is
> duplicated for the device ovirtmgmt
> restore-net::INFO::2016-06-18
> 08:26:22,028::netconfpersistence::187::root::(_clearDisk) Clearing
> /var/run/vdsm/netconf/nets/ and /var/run/vdsm/netconf/bonds/
> restore-net::DEBUG::2016-06-18
> 08:26:22,028::netconfpersistence::195::root::(_clearDisk) No existent
> config to clear.
> restore-net::INFO::2016-06-18
> 08:26:22,028::netconfpersistence::131::root::(save) Saved new config
> RunningConfig({'ovirtmgmt': {u'hostQos': {u'out': {u'ls': {u'm2': 50}}},
> 'nic': u'em1', u'ipaddr': u'10.0.100.17', u'switch': u'legacy', u'mtu':
> 1500, u'netmask': u'255.255.255.0', u'STP': u'no', u'bridged': u'true',
> u'gateway': u'10.0.100.2', u'defaultRoute': True}}, {}) to
> /var/run/vdsm/netconf/nets/ and /var/run/vdsm/netconf/bonds/
> restore-net::INFO::2016-06-18
> 08:26:22,029::vdsm-restore-net-config::447::root::(restore) restoration
> completed successfully.
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Will Shared storage distributes Compute and Storage Resources?

2016-06-19 Thread Maor Lipchuk
On Sun, Jun 19, 2016 at 7:08 AM, Dewey Du  wrote:

> I use GlusterFS as my Shared storage to hold the Virtual Machines. I
> confuse what does Host ( oVirt Node) do. Will the VMs use the Compute
> Resources of Host (oVirt Node), such as CPU, Memory? And GlusterFS servers
> are used for store the VM images?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
Basically yes.
All the storage domains in the setup will be used to store the disks of the
VMs/Templates, and the hosts will be used to run the VMs
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unregistered disks - snapshot images present (Was Re: Import storage domain - disks not listed)

2016-05-08 Thread Maor Lipchuk
On Fri, May 6, 2016 at 3:08 PM, Sahina Bose <sab...@redhat.com> wrote:

> Hi,
>
> Back to Import Storage domain questions -
>
> To ensure consistency, we snapshot the running VMs prior to replicating
> the gluster volume to a central site. The VMs have been setup such that OS
> disks are on a gluster volume called "vmstore" and non-OS disks are on a
> gluster volume called "data"
> For back up , we are only interested in "data" - so only this storage
> domain is imported at backup recovery site.
>
> Since the "data" storage domain does not contain all disks - VMs cannot be
> imported - that's expected.
>
> To retrieve the backed up disks - since the disk has snapshot - this does
> not seem possible either. How do we work around this?
>

Since the engine can't support floating disks with snapshots, you can try to
merge those snapshots (using qemu-img commit and qemu-img rebase) until only
one volume is left.
Once there is only one volume, oVirt can import the disk.
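
To illustrate, a sketch with placeholder file names (please work on copies
of the volume files, not on the originals): the chain can be collapsed
either by flattening the leaf into a new standalone image or by committing
it back into its backing file, for example:

  # Inspect the chain first
  qemu-img info --backing-chain leaf_volume

  # Option A: flatten the whole chain into one new standalone qcow2 image
  qemu-img convert -f qcow2 -O qcow2 leaf_volume single_volume.qcow2

  # Option B: commit the leaf back into its backing file
  qemu-img commit leaf_volume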

Another workaround: if you still have the setup which could not import the
VM because of the missing volume, you can try to manipulate the OVF in the
database.
The VMs' and Templates' XML representations are contained in the
"unregistered_ovf_of_entities" table.
You can fetch the VM's OVF from that table:
You can fetch the VM's OVF from that table:
  SELECT ovf_data FROM unregistered_ovf_of_entities where entity_name ilike
'%(the VM name (You can also fetch by id with entity_id))%';

Update the OVF with the volume IDs that actually exist on the storage
server. After updating this, the import of the VM should work.



>
> thanks!
>
>
>
> On 05/03/2016 03:21 PM, Maor Lipchuk wrote:
>
>> There is this bug https://bugzilla.redhat.com/1270562 which forces
>> OVF_STORE update but it is not yet implemented.
>>
>> On Tue, May 3, 2016 at 8:52 AM, Sahina Bose <sab...@redhat.com> wrote:
>>
>>>
>>> On 05/02/2016 09:36 PM, Maor Lipchuk wrote:
>>>
>>>> On Mon, May 2, 2016 at 5:45 PM, Sahina Bose <sab...@redhat.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 05/02/2016 05:57 PM, Maor Lipchuk wrote:
>>>>>
>>>>>
>>>>>
>>>>> On Mon, May 2, 2016 at 1:08 PM, Sahina Bose <sab...@redhat.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On 05/02/2016 03:15 PM, Maor Lipchuk wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, May 2, 2016 at 12:29 PM, Sahina Bose <sab...@redhat.com>
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 05/01/2016 05:33 AM, Maor Lipchuk wrote:
>>>>>>>
>>>>>>> Hi Sahina,
>>>>>>>
>>>>>>> The disks with snapshots should be part of the VMs, once you will
>>>>>>> register those VMs you should see those disks in the disks sub tab.
>>>>>>>
>>>>>>>
>>>>>>> Maor,
>>>>>>>
>>>>>>> I was unable to import VM which prompted question - I assumed we had
>>>>>>> to
>>>>>>> register disks first. So maybe I need to troubleshoot why I could
>>>>>>> not import
>>>>>>> VMs from the domain first.
>>>>>>> It fails with an error "Image does not exist". Where does it look for
>>>>>>> volume IDs to pass to GetImageInfoVDSCommand - the OVF disk?
>>>>>>>
>>>>>>>
>>>>>>> In engine.log
>>>>>>>
>>>>>>> 2016-05-02 04:15:14,812 ERROR
>>>>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
>>>>>>> (ajp-/127.0.0.1:8702-1) [32f0b27c] Ir
>>>>>>> sBroker::getImageInfo::Failed getting image info
>>>>>>> imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c' does not exist on
>>>>>>> domainName='sahinasl
>>>>>>> ave', domainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7', error code:
>>>>>>> 'VolumeDoesNotExist', message: Volume does not exist: (u'6f4da17a-0
>>>>>>> 5a2-4d77-8091-d2fca3bbea1c',)
>>>>>>> 2016-05-02 04:15:14,814 WARN
>>>>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
>>>>>>> (ajp-/127.0.0.1:8702-1) [32f0b27c]
>>>>>>> executeIrsBrokerCo

Re: [ovirt-users] Import storage domain - disks not listed

2016-05-02 Thread Maor Lipchuk
On Mon, May 2, 2016 at 5:45 PM, Sahina Bose <sab...@redhat.com> wrote:
>
>
>
> On 05/02/2016 05:57 PM, Maor Lipchuk wrote:
>
>
>
> On Mon, May 2, 2016 at 1:08 PM, Sahina Bose <sab...@redhat.com> wrote:
>>
>>
>>
>> On 05/02/2016 03:15 PM, Maor Lipchuk wrote:
>>
>>
>>
>> On Mon, May 2, 2016 at 12:29 PM, Sahina Bose <sab...@redhat.com> wrote:
>>>
>>>
>>>
>>> On 05/01/2016 05:33 AM, Maor Lipchuk wrote:
>>>
>>> Hi Sahina,
>>>
>>> The disks with snapshots should be part of the VMs, once you will register 
>>> those VMs you should see those disks in the disks sub tab.
>>>
>>>
>>> Maor,
>>>
>>> I was unable to import VM which prompted question - I assumed we had to 
>>> register disks first. So maybe I need to troubleshoot why I could not 
>>> import VMs from the domain first.
>>> It fails with an error "Image does not exist". Where does it look for 
>>> volume IDs to pass to GetImageInfoVDSCommand - the OVF disk?
>>>
>>>
>>> In engine.log
>>>
>>> 2016-05-02 04:15:14,812 ERROR 
>>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] 
>>> (ajp-/127.0.0.1:8702-1) [32f0b27c] Ir
>>> sBroker::getImageInfo::Failed getting image info 
>>> imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c' does not exist on 
>>> domainName='sahinasl
>>> ave', domainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7', error code: 
>>> 'VolumeDoesNotExist', message: Volume does not exist: (u'6f4da17a-0
>>> 5a2-4d77-8091-d2fca3bbea1c',)
>>> 2016-05-02 04:15:14,814 WARN  
>>> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] 
>>> (ajp-/127.0.0.1:8702-1) [32f0b27c]
>>> executeIrsBrokerCommand: getImageInfo on 
>>> '6f4da17a-05a2-4d77-8091-d2fca3bbea1c' threw an exception - assuming image 
>>> doesn't exist: IRS
>>> GenericException: IRSErrorException: VolumeDoesNotExist
>>> 2016-05-02 04:15:14,814 INFO  
>>> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand] 
>>> (ajp-/127.0.0.1:8702-1) [32f0b27c]
>>> FINISH, DoesImageExistVDSCommand, return: false, log id: 3366f39b
>>> 2016-05-02 04:15:14,814 WARN  
>>> [org.ovirt.engine.core.bll.ImportVmFromConfigurationCommand] 
>>> (ajp-/127.0.0.1:8702-1) [32f0b27c] CanDoAction of action 
>>> 'ImportVmFromConfiguration' failed for user admin@internal. Reasons: 
>>> VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
>>>
>>>
>>>
>>> jsonrpc.Executor/2::DEBUG::2016-05-02 
>>> 13:45:13,903::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 
>>> 'Volume.getInfo' in
>>> bridge with {u'imageID': u'c52e4e02-dc6c-4a77-a184-9fcab88106c2', 
>>> u'storagepoolID': u'46ac4975-a84e-4e76-8e73-7971d0dadf0b', u'volumeI
>>> D': u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c', u'storagedomainID': 
>>> u'5e1a37cf-933d-424c-8e3d-eb9e40b690a7'}
>>>
>>> jsonrpc.Executor/2::DEBUG::2016-05-02 
>>> 13:45:13,910::fileVolume::535::Storage.Volume::(validateVolumePath) 
>>> validate path for 6f4da17a-05a2-4d77-8091-d2fca3bbea1c
>>> jsonrpc.Executor/2::ERROR::2016-05-02 
>>> 13:45:13,914::task::866::Storage.TaskManager.Task::(_setError) 
>>> Task=`94dba16f-f7eb-439e-95e2-a04b34b92f84`::Unexpected error
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
>>> return fn(*args, **kargs)
>>>   File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
>>> res = f(*args, **kwargs)
>>>   File "/usr/share/vdsm/storage/hsm.py", line 3162, in getVolumeInfo
>>> volUUID=volUUID).getInfo()
>>>   File "/usr/share/vdsm/storage/sd.py", line 457, in produceVolume
>>> volUUID)
>>>   File "/usr/share/vdsm/storage/glusterVolume.py", line 16, in __init__
>>> volUUID)
>>>   File "/usr/share/vdsm/storage/fileVolume.py", line 58, in __init__
>>> volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
>>>   File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
>>> self.validate()
>>>   File "/usr/share/vdsm/storage/volume.py", line 194, in validate
>>> self.validateVolumePath()
>>>   File "/usr/share/vdsm/storage/fileVolume.py", line 540, in 
>>> validateVolumePath
>>

Re: [ovirt-users] Import storage domain - disks not listed

2016-05-02 Thread Maor Lipchuk
On Mon, May 2, 2016 at 1:08 PM, Sahina Bose <sab...@redhat.com> wrote:

>
>
> On 05/02/2016 03:15 PM, Maor Lipchuk wrote:
>
>
>
> On Mon, May 2, 2016 at 12:29 PM, Sahina Bose <sab...@redhat.com> wrote:
>
>>
>>
>> On 05/01/2016 05:33 AM, Maor Lipchuk wrote:
>>
>> Hi Sahina,
>>
>> The disks with snapshots should be part of the VMs, once you will
>> register those VMs you should see those disks in the disks sub tab.
>>
>>
>> Maor,
>>
>> I was unable to import VM which prompted question - I assumed we had to
>> register disks first. So maybe I need to troubleshoot why I could not
>> import VMs from the domain first.
>> It fails with an error "Image does not exist". Where does it look for
>> volume IDs to pass to GetImageInfoVDSCommand - the OVF disk?
>>
>
>> In engine.log
>>
>> 2016-05-02 04:15:14,812 ERROR
>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
>> (ajp-/127.0.0.1:8702-1) [32f0b27c] Ir
>> sBroker::getImageInfo::Failed getting image info
>> imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c' does not exist on
>> domainName='sahinasl
>> ave', domainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7', error code:
>> 'VolumeDoesNotExist', message: Volume does not exist: (u'6f4da17a-0
>> 5a2-4d77-8091-d2fca3bbea1c',)
>> 2016-05-02 04:15:14,814 WARN
>> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
>> (ajp-/127.0.0.1:8702-1) [32f0b27c]
>> executeIrsBrokerCommand: getImageInfo on
>> '6f4da17a-05a2-4d77-8091-d2fca3bbea1c' threw an exception - assuming image
>> doesn't exist: IRS
>> GenericException: IRSErrorException: VolumeDoesNotExist
>> 2016-05-02 04:15:14,814 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
>> (ajp-/127.0.0.1:8702-1) [32f0b27c]
>> FINISH, DoesImageExistVDSCommand, return: false, log id: 3366f39b
>> 2016-05-02 04:15:14,814 WARN
>> [org.ovirt.engine.core.bll.ImportVmFromConfigurationCommand]
>> (ajp-/127.0.0.1:8702-1) [32f0b27c] CanDoAction of action
>> 'ImportVmFromConfiguration' failed for user admin@internal. Reasons:
>> VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
>>
>>
>>
>> jsonrpc.Executor/2::DEBUG::2016-05-02
>> 13:45:13,903::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling
>> 'Volume.getInfo' in
>> bridge with {u'imageID': u'c52e4e02-dc6c-4a77-a184-9fcab88106c2',
>> u'storagepoolID': u'46ac4975-a84e-4e76-8e73-7971d0dadf0b', u'volumeI
>> D': u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c', u'storagedomainID':
>> u'5e1a37cf-933d-424c-8e3d-eb9e40b690a7'}
>>
>> jsonrpc.Executor/2::DEBUG::2016-05-02
>> 13:45:13,910::fileVolume::535::Storage.Volume::(validateVolumePath)
>> validate path for 6f4da17a-05a2-4d77-8091-d2fca3bbea1c
>> jsonrpc.Executor/2::ERROR::2016-05-02
>> 13:45:13,914::task::866::Storage.TaskManager.Task::(_setError)
>> Task=`94dba16f-f7eb-439e-95e2-a04b34b92f84`::Unexpected error
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
>> return fn(*args, **kargs)
>>   File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
>> res = f(*args, **kwargs)
>>   File "/usr/share/vdsm/storage/hsm.py", line 3162, in getVolumeInfo
>> volUUID=volUUID).getInfo()
>>   File "/usr/share/vdsm/storage/sd.py", line 457, in produceVolume
>> volUUID)
>>   File "/usr/share/vdsm/storage/glusterVolume.py", line 16, in __init__
>> volUUID)
>>   File "/usr/share/vdsm/storage/fileVolume.py", line 58, in __init__
>> volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
>>   File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
>> self.validate()
>>   File "/usr/share/vdsm/storage/volume.py", line 194, in validate
>> self.validateVolumePath()
>>   File "/usr/share/vdsm/storage/fileVolume.py", line 540, in
>> validateVolumePath
>> raise se.VolumeDoesNotExist(self.volUUID)
>> VolumeDoesNotExist: Volume does not exist:
>> (u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)
>>
>> When I look at the tree output - there's no
>> 6f4da17a-05a2-4d77-8091-d2fca3bbea1c file.
>>
>>
>> ├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659
>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
>> │   │   │   ├── 766a15b9-57db

Re: [ovirt-users] Import storage domain - disks not listed

2016-05-02 Thread Maor Lipchuk
On Mon, May 2, 2016 at 12:29 PM, Sahina Bose <sab...@redhat.com> wrote:

>
>
> On 05/01/2016 05:33 AM, Maor Lipchuk wrote:
>
> Hi Sahina,
>
> The disks with snapshots should be part of the VMs, once you will register
> those VMs you should see those disks in the disks sub tab.
>
>
> Maor,
>
> I was unable to import VM which prompted question - I assumed we had to
> register disks first. So maybe I need to troubleshoot why I could not
> import VMs from the domain first.
> It fails with an error "Image does not exist". Where does it look for
> volume IDs to pass to GetImageInfoVDSCommand - the OVF disk?
>

> In engine.log
>
> 2016-05-02 04:15:14,812 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
> (ajp-/127.0.0.1:8702-1) [32f0b27c] Ir
> sBroker::getImageInfo::Failed getting image info
> imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c' does not exist on
> domainName='sahinasl
> ave', domainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7', error code:
> 'VolumeDoesNotExist', message: Volume does not exist: (u'6f4da17a-0
> 5a2-4d77-8091-d2fca3bbea1c',)
> 2016-05-02 04:15:14,814 WARN
> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
> (ajp-/127.0.0.1:8702-1) [32f0b27c]
> executeIrsBrokerCommand: getImageInfo on
> '6f4da17a-05a2-4d77-8091-d2fca3bbea1c' threw an exception - assuming image
> doesn't exist: IRS
> GenericException: IRSErrorException: VolumeDoesNotExist
> 2016-05-02 04:15:14,814 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
> (ajp-/127.0.0.1:8702-1) [32f0b27c]
> FINISH, DoesImageExistVDSCommand, return: false, log id: 3366f39b
> 2016-05-02 04:15:14,814 WARN
> [org.ovirt.engine.core.bll.ImportVmFromConfigurationCommand]
> (ajp-/127.0.0.1:8702-1) [32f0b27c] CanDoAction of action
> 'ImportVmFromConfiguration' failed for user admin@internal. Reasons:
> VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
>
>
>
> jsonrpc.Executor/2::DEBUG::2016-05-02
> 13:45:13,903::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling
> 'Volume.getInfo' in
> bridge with {u'imageID': u'c52e4e02-dc6c-4a77-a184-9fcab88106c2',
> u'storagepoolID': u'46ac4975-a84e-4e76-8e73-7971d0dadf0b', u'volumeI
> D': u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c', u'storagedomainID':
> u'5e1a37cf-933d-424c-8e3d-eb9e40b690a7'}
>
> jsonrpc.Executor/2::DEBUG::2016-05-02
> 13:45:13,910::fileVolume::535::Storage.Volume::(validateVolumePath)
> validate path for 6f4da17a-05a2-4d77-8091-d2fca3bbea1c
> jsonrpc.Executor/2::ERROR::2016-05-02
> 13:45:13,914::task::866::Storage.TaskManager.Task::(_setError)
> Task=`94dba16f-f7eb-439e-95e2-a04b34b92f84`::Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 3162, in getVolumeInfo
> volUUID=volUUID).getInfo()
>   File "/usr/share/vdsm/storage/sd.py", line 457, in produceVolume
> volUUID)
>   File "/usr/share/vdsm/storage/glusterVolume.py", line 16, in __init__
> volUUID)
>   File "/usr/share/vdsm/storage/fileVolume.py", line 58, in __init__
> volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
>   File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
> self.validate()
>   File "/usr/share/vdsm/storage/volume.py", line 194, in validate
> self.validateVolumePath()
>   File "/usr/share/vdsm/storage/fileVolume.py", line 540, in
> validateVolumePath
> raise se.VolumeDoesNotExist(self.volUUID)
> VolumeDoesNotExist: Volume does not exist:
> (u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)
>
> When I look at the tree output - there's no
> 6f4da17a-05a2-4d77-8091-d2fca3bbea1c file.
>
>
> ├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659
> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2
> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease
> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta
> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa
> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease
> │   │   │   └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta
>


Usually the "image does not exists" message is prompted once the VM's disk
is managed in a different storage domain which were not imported yet.

Few questions:
1. Were there any other Storage 

Re: [ovirt-users] Import storage domain - disks not listed

2016-04-30 Thread Maor Lipchuk
Hi Sahina,

The disks with snapshots should be part of the VMs; once you register those
VMs, you should see those disks in the Disks sub-tab.

Regarding floating disks (without snapshots), you can register them through
REST.
If you are working on the master branch, there should also be a sub-tab
dedicated to those.
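
As a hedged sketch of the REST calls (the engine address, credentials and
IDs are placeholders, and the exact form of the "unregistered" parameter can
differ between versions, so please double-check against the REST API
documentation):

  # List the unregistered disks on the storage domain
  curl -k -u admin@internal:PASSWORD \
       "https://engine.example.com/ovirt-engine/api/storagedomains/SD_UUID/disks;unregistered"

  # Register one of them by POSTing its ID back to the same collection
  curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
       -X POST "https://engine.example.com/ovirt-engine/api/storagedomains/SD_UUID/disks;unregistered" \
       -d '<disk id="DISK_UUID"/>'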

Regards,
Maor

On Tue, Apr 26, 2016 at 1:44 PM, Sahina Bose  wrote:

> Hi all,
>
> I have a gluster volume used as data storage domain which is replicated to
> a slave gluster volume (say, slavevol) using gluster's geo-replication
> feature.
>
> Now, in a new oVirt instance, I use the import storage domain to import
> the slave gluster volume. The "VM Import" tab correctly lists the VMs that
> were present in my original gluster volume. However the "Disks" tab is
> empty.
>
> GET
> https://new-ovitt/api/storagedomains/5e1a37cf-933d-424c-8e3d-eb9e40b690a7/disks;unregistered
> -->
> 
>
>
> In the code GetUnregisteredDiskQuery - if volumesList.size() != 1 - the
> image is skipped with a comment that we can't deal with snapshots.
>
> How do I recover the disks/images in this case?
>
>
> Further info:
>
> /rhev/data-center/mnt/glusterSD/10.70.40.112:_slavevol
> ├── 5e1a37cf-933d-424c-8e3d-eb9e40b690a7
> │   ├── dom_md
> │   │   ├── ids
> │   │   ├── inbox
> │   │   ├── leases
> │   │   ├── metadata
> │   │   └── outbox
> │   ├── images
> │   │   ├── 202efaa6-0d01-40f3-a541-10eee920d221
> │   │   │   ├── eb701046-6ee1-4c9d-b097-e51a8fd283e1
> │   │   │   ├── eb701046-6ee1-4c9d-b097-e51a8fd283e1.lease
> │   │   │   └── eb701046-6ee1-4c9d-b097-e51a8fd283e1.meta
> │   │   ├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659
> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2
> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease
> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta
> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa
> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease
> │   │   │   └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta
> │   │   ├── c75de5b7-aa88-48d7-ba1b-067181eac6ae
> │   │   │   ├── ff09e16a-e8a0-452b-b95c-e160e68d09a9
> │   │   │   ├── ff09e16a-e8a0-452b-b95c-e160e68d09a9.lease
> │   │   │   └── ff09e16a-e8a0-452b-b95c-e160e68d09a9.meta
> │   │   ├── efa94a0d-c08e-4ad9-983b-4d1d76bca865
> │   │   │   ├── 64e3913c-da91-447c-8b69-1cff1f34e4b7
> │   │   │   ├── 64e3913c-da91-447c-8b69-1cff1f34e4b7.lease
> │   │   │   ├── 64e3913c-da91-447c-8b69-1cff1f34e4b7.meta
> │   │   │   ├── 8174e8b4-3605-4db3-86a1-cb62c3a079f4
> │   │   │   ├── 8174e8b4-3605-4db3-86a1-cb62c3a079f4.lease
> │   │   │   ├── 8174e8b4-3605-4db3-86a1-cb62c3a079f4.meta
> │   │   │   ├── e79a8821-bb4a-436a-902d-3876f107dd99
> │   │   │   ├── e79a8821-bb4a-436a-902d-3876f107dd99.lease
> │   │   │   └── e79a8821-bb4a-436a-902d-3876f107dd99.meta
> │   │   └── f5eacc6e-4f16-4aa5-99ad-53ac1cda75b7
> │   │   ├── 476bbfe9-1805-4c43-bde6-e7de5f7bd75d
> │   │   ├── 476bbfe9-1805-4c43-bde6-e7de5f7bd75d.lease
> │   │   └── 476bbfe9-1805-4c43-bde6-e7de5f7bd75d.meta
> │   └── master
> │   ├── tasks
> │   └── vms
> └── __DIRECT_IO_TEST__
>
> engine.log:
> 2016-04-26 06:37:57,715 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
> (org.ovirt.thread.pool-6-thread-25) [5e6b7a53] FINISH,
> GetImageInfoVDSCommand, return: org.ov
> irt.engine.core.common.businessentities.storage.DiskImage@d4b3ac2f, log
> id: 7b693bad
> 2016-04-26 06:37:57,724 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVolumesListVDSCommand]
> (org.ovirt.thread.pool-6-thread-25) [5e6b7a53] START,
> GetVolumesListVDSCommand( StoragePool
> DomainAndGroupIdBaseVDSCommandParameters:{runAsync='true',
> storagePoolId='ed338557-5995-4634-97e2-15454a9d8800',
> ignoreFailoverLimit='false',
> storageDomainId='5e1a37cf-933d-424c-8e3d-eb9e40b
> 690a7', imageGroupId='c52e4e02-dc6c-4a77-a184-9fcab88106c2'}), log id:
> 741b9214
> 2016-04-26 06:37:58,748 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVolumesListVDSCommand]
> (org.ovirt.thread.pool-6-thread-25) [5e6b7a53] FINISH,
> GetVolumesListVDSCommand, return: [9
> 0f1e26a-00e9-4ea5-9e92-2e448b9b8bfa, 766a15b9-57db-417d-bfa0-beadbbb84ad2,
> 34e46104-8fad-4510-a5bf-0730b97a6659], log id: 741b9214
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Delete Failed to update OVF disks, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hostedengine_nfs).

2016-04-21 Thread Maor Lipchuk
From the logs (see [1]) it looks like you encountered the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1303316

Simone, can you confirm it is the same case mentioned in the bug? Is there
any workaround you can suggest to stop getting those errors (maybe move the
host to maintenance mode and restart the hypervisor as suggested in the bug,
or upgrade VDSM)?

[1]
MainThread::INFO::2016-04-20
17:14:02,500::hosted_engine::688::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Reloading vm.conf from the shared storage domain
MainThread::INFO::2016-04-20
17:14:02,500::config::205::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::WARNING::2016-04-20
17:14:02,807::ovf_store::104::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
Unable to find OVF_STORE
MainThread::ERROR::2016-04-20
17:14:02,873::config::234::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf

Regards,
Maor

On Wed, Apr 20, 2016 at 6:21 PM, Paul Groeneweg | Pazion <p...@pazion.nl>
wrote:

> The logs are not from the machine where the hosted engine is running on,
> but from the SPM.
>
> Op wo 20 apr. 2016 om 17:19 schreef Paul Groeneweg | Pazion <
> p...@pazion.nl>:
>
>> Hereby the logs.
>>
>>
>> Op wo 20 apr. 2016 om 17:11 schreef Maor Lipchuk <mlipc...@redhat.com>:
>>
>>> Hi Paul,
>>>
>>> Can u please attach the engine and VDSM logs with those failures to
>>> check the origin of those failures
>>>
>>> Thanks,
>>> Maor
>>>
>>> On Wed, Apr 20, 2016 at 6:06 PM, Paul Groeneweg | Pazion <p...@pazion.nl
>>> > wrote:
>>>
>>>> Looks like the system does try recreate the OVF :-)
>>>> Too bad this failed again...
>>>>
>>>> http://screencast.com/t/RlYCR1rk8T
>>>> http://screencast.com/t/CpcQuoKg
>>>>
>>>> Failed to create OVF store disk for Storage Domain hostedengine_nfs.
>>>> The Disk with the id b6f34661-8701-4f82-a07c-ed7faab4a1b8 might be
>>>> removed manually for automatic attempt to create new one.
>>>> OVF updates won't be attempted on the created disk.
>>>>
>>>> And on the hosted storage disk tab :
>>>> http://screencast.com/t/ZmwjsGoQ1Xbp
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Op wo 20 apr. 2016 om 09:17 schreef Paul Groeneweg | Pazion <
>>>> p...@pazion.nl>:
>>>>
>>>>> I have added a ticket:
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1328718
>>>>>
>>>>> Looking forward to solve!  ( trying to providing as much info as
>>>>> required ).
>>>>>
>>>>> For the short term, wwhat do I need to restore/rollback to get the
>>>>> OVF_STORE back in the Web GUI? is this all db?
>>>>>
>>>>>
>>>>>
>>>>> Op wo 20 apr. 2016 om 09:04 schreef Paul Groeneweg | Pazion <
>>>>> p...@pazion.nl>:
>>>>>
>>>>>> Yes I removed them also from the web interface.
>>>>>> Cen I recreate these or how can I restore?
>>>>>>
>>>>>> Op wo 20 apr. 2016 om 09:01 schreef Roy Golan <rgo...@redhat.com>:
>>>>>>
>>>>>>> On Wed, Apr 20, 2016 at 9:05 AM, Paul Groeneweg | Pazion <
>>>>>>> p...@pazion.nl> wrote:
>>>>>>>
>>>>>>>> Hi Roy,
>>>>>>>>
>>>>>>>> What do you mean with a RFE , submit a bug ticket?
>>>>>>>>
>>>>>>>> Yes please. https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt
>>>>>>>
>>>>>>>
>>>>>>>> Here is what I did:
>>>>>>>>
>>>>>>>> I removed the OVF disks as explained from the hosted engine/storage.
>>>>>>>> I started another server, tried several things like putting to
>>>>>>>> maintenance and reinstalling, but I keep getting:
>>>>>>>>
>>>>>>>> Apr 20 00:18:00 geisha-3 ovirt-ha-agent:
>>>>>>>> WARNING:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Unable to 

Re: [ovirt-users] Delete Failed to update OVF disks, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hostedengine_nfs).

2016-03-31 Thread Maor Lipchuk
[Adding Roy to the thread]
Roy,

Can you please share your insight regarding the hosted engine behavior.
It looks that one of the OVF_STORE disks is not valid and I think that
detach/attach of the storage domain might fix the audit log errors.
The question is, if it is possible to do so in hosted engine environment.

Regards,
Maor

On Thu, Mar 31, 2016 at 4:14 PM, Paul Groeneweg | Pazion <p...@pazion.nl>
wrote:

>
>
> This storage domain is my hosted engine storage domain. So I should put it
> to maintenance and then detach? http://screencast.com/t/kjgNpI7fQ
>
> Am I still able to use the hosed engine ( web interface) when this stoarge
> domain is in maintenance and detached?
>
> As I don't want to risk detaching hosted storage storage domain and as a
> results breaking my whole setup.
>
>
>
>
>
> Op do 31 mrt. 2016 om 15:07 schreef Maor Lipchuk <mlipc...@redhat.com>:
>
>> Have you already tried to detach and attach the Storage Domain?
>>
>> On Thu, Mar 31, 2016 at 3:11 PM, Paul Groeneweg | Pazion <p...@pazion.nl>
>> wrote:
>>
>>> Hi Maor,
>>>
>>> I am refering to the eventlog, where these ovf errors appear every hour
>>> and fill up my eventlog
>>>
>>> http://screencast.com/t/S8cfXMsdGM
>>>
>>>
>>>
>>> Op do 31 mrt. 2016 om 14:07 schreef Maor Lipchuk <mlipc...@redhat.com>:
>>>
>>>> Hi Paul,
>>>>
>>>> Which problem are you referring, the remove of OVF_STORE disks or the
>>>> audit log warning?
>>>> In the screencast I can see that the Storage Domain is active but I
>>>> didn't notice any audit log errors.
>>>>
>>>> Regards,
>>>> Maor
>>>>
>>>> On Thu, Mar 31, 2016 at 2:38 PM, Paul Groeneweg | Pazion <
>>>> p...@pazion.nl> wrote:
>>>>
>>>>> Hi Maor,
>>>>>
>>>>> The 3.6.4 did not solve the problem.
>>>>>
>>>>> Any idea how to fix this issue?
>>>>> I believe it has something todo with the status of hosted_storage (
>>>>> 1st entry )  http://screencast.com/t/vCx0CQiXm
>>>>>
>>>>> Op za 26 mrt. 2016 om 18:08 schreef Maor Lipchuk <mlipc...@redhat.com
>>>>> >:
>>>>>
>>>>>> Hi Paul,
>>>>>>
>>>>>> Can you please update whether the upgrade for 3.6.4 has helped.
>>>>>> Regarding the OVF_STORE disks, those disks should not be deleted
>>>>>> since deleting them might reflect on the Disaster Recovery scenarios
>>>>>>
>>>>>> Regards,
>>>>>> Maor
>>>>>>
>>>>>>
>>>>>> On Thu, Mar 24, 2016 at 10:10 PM, Paul Groeneweg | Pazion <
>>>>>> p...@pazion.nl> wrote:
>>>>>>
>>>>>>> I believe my problem is related to this bug
>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1303316
>>>>>>>
>>>>>>> As you can see in the screenshot the hostedengine storage is
>>>>>>> unassigned and so both ovf_stores are OK, but not linked and therefore
>>>>>>>  can't be updated?!
>>>>>>>
>>>>>>> So for now I guess I'll wait for update 3.6.4 and cross my fingers
>>>>>>> and updates solves the event log error.
>>>>>>>
>>>>>>> Op do 24 mrt. 2016 om 20:15 schreef Paul Groeneweg | Pazion <
>>>>>>> p...@pazion.nl>:
>>>>>>>
>>>>>>>> I checked, the OVf, but I can only remove the OVF.
>>>>>>>>
>>>>>>>> http://screencast.com/t/vCx0CQiXm
>>>>>>>>
>>>>>>>> What happens when I remove them, is it safe?
>>>>>>>>
>>>>>>>> I checked agent.log and do not see the errors there
>>>>>>>>
>>>>>>>> MainThread::INFO::2016-03-24
>>>>>>>> 20:12:28,154::image::116::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
>>>>>>>> Preparing images
>>>>>>>>
>>>>>>>> MainThread::INFO::2016-03-24
>>>>>>>> 20:12:28,811::hosted_engine::684::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
>>>>>>>> Reloading vm.conf from the shared storage domain

Re: [ovirt-users] Delete Failed to update OVF disks, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hostedengine_nfs).

2016-03-31 Thread Maor Lipchuk
Have you already tried to detach and attach the Storage Domain?

On Thu, Mar 31, 2016 at 3:11 PM, Paul Groeneweg | Pazion <p...@pazion.nl>
wrote:

> Hi Maor,
>
> I am refering to the eventlog, where these ovf errors appear every hour
> and fill up my eventlog
>
> http://screencast.com/t/S8cfXMsdGM
>
>
>
> Op do 31 mrt. 2016 om 14:07 schreef Maor Lipchuk <mlipc...@redhat.com>:
>
>> Hi Paul,
>>
>> Which problem are you referring, the remove of OVF_STORE disks or the
>> audit log warning?
>> In the screencast I can see that the Storage Domain is active but I
>> didn't notice any audit log errors.
>>
>> Regards,
>> Maor
>>
>> On Thu, Mar 31, 2016 at 2:38 PM, Paul Groeneweg | Pazion <p...@pazion.nl>
>> wrote:
>>
>>> Hi Maor,
>>>
>>> The 3.6.4 did not solve the problem.
>>>
>>> Any idea how to fix this issue?
>>> I believe it has something todo with the status of hosted_storage ( 1st
>>> entry )  http://screencast.com/t/vCx0CQiXm
>>>
>>> Op za 26 mrt. 2016 om 18:08 schreef Maor Lipchuk <mlipc...@redhat.com>:
>>>
>>>> Hi Paul,
>>>>
>>>> Can you please update whether the upgrade for 3.6.4 has helped.
>>>> Regarding the OVF_STORE disks, those disks should not be deleted since
>>>> deleting them might reflect on the Disaster Recovery scenarios
>>>>
>>>> Regards,
>>>> Maor
>>>>
>>>>
>>>> On Thu, Mar 24, 2016 at 10:10 PM, Paul Groeneweg | Pazion <
>>>> p...@pazion.nl> wrote:
>>>>
>>>>> I believe my problem is related to this bug
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1303316
>>>>>
>>>>> As you can see in the screenshot the hostedengine storage is
>>>>> unassigned and so both ovf_stores are OK, but not linked and therefore
>>>>>  can't be updated?!
>>>>>
>>>>> So for now I guess I'll wait for update 3.6.4 and cross my fingers and
>>>>> updates solves the event log error.
>>>>>
>>>>> Op do 24 mrt. 2016 om 20:15 schreef Paul Groeneweg | Pazion <
>>>>> p...@pazion.nl>:
>>>>>
>>>>>> I checked, the OVf, but I can only remove the OVF.
>>>>>>
>>>>>> http://screencast.com/t/vCx0CQiXm
>>>>>>
>>>>>> What happens when I remove them, is it safe?
>>>>>>
>>>>>> I checked agent.log and do not see the errors there
>>>>>>
>>>>>> MainThread::INFO::2016-03-24
>>>>>> 20:12:28,154::image::116::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
>>>>>> Preparing images
>>>>>>
>>>>>> MainThread::INFO::2016-03-24
>>>>>> 20:12:28,811::hosted_engine::684::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
>>>>>> Reloading vm.conf from the shared storage domain
>>>>>>
>>>>>> MainThread::INFO::2016-03-24
>>>>>> 20:12:28,811::config::205::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
>>>>>> Trying to get a fresher copy of vm configuration from the OVF_STORE
>>>>>>
>>>>>> MainThread::INFO::2016-03-24
>>>>>> 20:12:28,936::ovf_store::100::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
>>>>>> Found OVF_STORE: imgUUID:18c50ea6-4654-4525-b241-09e15acf5e99,
>>>>>> volUUID:2f2ccb59-a3f3-43bf-87eb-53595af01cf5
>>>>>>
>>>>>> MainThread::INFO::2016-03-24
>>>>>> 20:12:29,147::ovf_store::100::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
>>>>>> Found OVF_STORE: imgUUID:6e14348b-af7a-49bc-9af2-8b703c17a53d,
>>>>>> volUUID:fabdd6f4-b8d6-4ffe-889c-df86b34619ca
>>>>>>
>>>>>> MainThread::INFO::2016-03-24
>>>>>> 20:12:29,420::ovf_store::109::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>>>>>> Extracting Engine VM OVF from the OVF_STORE
>>>>>>
>>>>>> MainThread::INFO::2016-03-24
>>>>>> 20:12:29,580::ovf_store::116::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>>>>>> OVF_STORE volume path: /rhev/data-center/mnt/hostedstorage.pazion.nl:
>>>>>> _opt_hosted-

Re: [ovirt-users] Delete Failed to update OVF disks, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hostedengine_nfs).

2016-03-31 Thread Maor Lipchuk
Hi Paul,

Which problem are you referring to: the removal of the OVF_STORE disks or
the audit log warning?
In the screencast I can see that the Storage Domain is active but I didn't
notice any audit log errors.

Regards,
Maor

On Thu, Mar 31, 2016 at 2:38 PM, Paul Groeneweg | Pazion <p...@pazion.nl>
wrote:

> Hi Maor,
>
> The 3.6.4 did not solve the problem.
>
> Any idea how to fix this issue?
> I believe it has something todo with the status of hosted_storage ( 1st
> entry )  http://screencast.com/t/vCx0CQiXm
>
> Op za 26 mrt. 2016 om 18:08 schreef Maor Lipchuk <mlipc...@redhat.com>:
>
>> Hi Paul,
>>
>> Can you please update whether the upgrade for 3.6.4 has helped.
>> Regarding the OVF_STORE disks, those disks should not be deleted since
>> deleting them might reflect on the Disaster Recovery scenarios
>>
>> Regards,
>> Maor
>>
>>
>> On Thu, Mar 24, 2016 at 10:10 PM, Paul Groeneweg | Pazion <p...@pazion.nl
>> > wrote:
>>
>>> I believe my problem is related to this bug
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1303316
>>>
>>> As you can see in the screenshot the hostedengine storage is unassigned
>>> and so both ovf_stores are OK, but not linked and therefore  can't be
>>> updated?!
>>>
>>> So for now I guess I'll wait for update 3.6.4 and cross my fingers and
>>> updates solves the event log error.
>>>
>>> Op do 24 mrt. 2016 om 20:15 schreef Paul Groeneweg | Pazion <
>>> p...@pazion.nl>:
>>>
>>>> I checked, the OVf, but I can only remove the OVF.
>>>>
>>>> http://screencast.com/t/vCx0CQiXm
>>>>
>>>> What happens when I remove them, is it safe?
>>>>
>>>> I checked agent.log and do not see the errors there
>>>>
>>>> MainThread::INFO::2016-03-24
>>>> 20:12:28,154::image::116::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
>>>> Preparing images
>>>>
>>>> MainThread::INFO::2016-03-24
>>>> 20:12:28,811::hosted_engine::684::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
>>>> Reloading vm.conf from the shared storage domain
>>>>
>>>> MainThread::INFO::2016-03-24
>>>> 20:12:28,811::config::205::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
>>>> Trying to get a fresher copy of vm configuration from the OVF_STORE
>>>>
>>>> MainThread::INFO::2016-03-24
>>>> 20:12:28,936::ovf_store::100::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
>>>> Found OVF_STORE: imgUUID:18c50ea6-4654-4525-b241-09e15acf5e99,
>>>> volUUID:2f2ccb59-a3f3-43bf-87eb-53595af01cf5
>>>>
>>>> MainThread::INFO::2016-03-24
>>>> 20:12:29,147::ovf_store::100::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
>>>> Found OVF_STORE: imgUUID:6e14348b-af7a-49bc-9af2-8b703c17a53d,
>>>> volUUID:fabdd6f4-b8d6-4ffe-889c-df86b34619ca
>>>>
>>>> MainThread::INFO::2016-03-24
>>>> 20:12:29,420::ovf_store::109::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>>>> Extracting Engine VM OVF from the OVF_STORE
>>>>
>>>> MainThread::INFO::2016-03-24
>>>> 20:12:29,580::ovf_store::116::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>>>> OVF_STORE volume path: /rhev/data-center/mnt/hostedstorage.pazion.nl:
>>>> _opt_hosted-engine/88b69eba-ef4f-4dbe-ba53-20dadd424d0e/images/6e14348b-af7a-49bc-9af2-8b703c17a53d/fabdd6f4-b8d6-4ffe-889c-df86b34619ca
>>>>
>>>> MainThread::INFO::2016-03-24
>>>> 20:12:29,861::config::225::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
>>>> Found an OVF for HE VM, trying to convert
>>>>
>>>> MainThread::INFO::2016-03-24
>>>> 20:12:29,865::config::230::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
>>>> Got vm.conf from OVF_STORE
>>>>
>>>> MainThread::INFO::2016-03-24
>>>> 20:12:29,997::hosted_engine::462::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>>>> Current state EngineUp (score: 3400)
>>>>
>>>>
>>>> So leaves me wondering if I should worry about the errors in the event
>>>> log.
>>>>
>>>>
>>>>
>>>> Op do 24 mrt. 2016 om 16:18 schreef Paul Groenewe

Re: [ovirt-users] Delete Failed to update OVF disks, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hostedengine_nfs).

2016-03-26 Thread Maor Lipchuk
Hi Paul,

Can you please let us know whether the upgrade to 3.6.4 has helped?
Regarding the OVF_STORE disks, those disks should not be deleted, since
deleting them might affect Disaster Recovery scenarios.

Regards,
Maor

On Thu, Mar 24, 2016 at 10:10 PM, Paul Groeneweg | Pazion <p...@pazion.nl>
wrote:

> I believe my problem is related to this bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1303316
>
> As you can see in the screenshot the hostedengine storage is unassigned
> and so both ovf_stores are OK, but not linked and therefore  can't be
> updated?!
>
> So for now I guess I'll wait for update 3.6.4 and cross my fingers and
> updates solves the event log error.
>
> Op do 24 mrt. 2016 om 20:15 schreef Paul Groeneweg | Pazion <
> p...@pazion.nl>:
>
>> I checked, the OVf, but I can only remove the OVF.
>>
>> http://screencast.com/t/vCx0CQiXm
>>
>> What happens when I remove them, is it safe?
>>
>> I checked agent.log and do not see the errors there
>>
>> MainThread::INFO::2016-03-24
>> 20:12:28,154::image::116::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
>> Preparing images
>>
>> MainThread::INFO::2016-03-24
>> 20:12:28,811::hosted_engine::684::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
>> Reloading vm.conf from the shared storage domain
>>
>> MainThread::INFO::2016-03-24
>> 20:12:28,811::config::205::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
>> Trying to get a fresher copy of vm configuration from the OVF_STORE
>>
>> MainThread::INFO::2016-03-24
>> 20:12:28,936::ovf_store::100::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
>> Found OVF_STORE: imgUUID:18c50ea6-4654-4525-b241-09e15acf5e99,
>> volUUID:2f2ccb59-a3f3-43bf-87eb-53595af01cf5
>>
>> MainThread::INFO::2016-03-24
>> 20:12:29,147::ovf_store::100::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
>> Found OVF_STORE: imgUUID:6e14348b-af7a-49bc-9af2-8b703c17a53d,
>> volUUID:fabdd6f4-b8d6-4ffe-889c-df86b34619ca
>>
>> MainThread::INFO::2016-03-24
>> 20:12:29,420::ovf_store::109::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>> Extracting Engine VM OVF from the OVF_STORE
>>
>> MainThread::INFO::2016-03-24
>> 20:12:29,580::ovf_store::116::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>> OVF_STORE volume path: /rhev/data-center/mnt/hostedstorage.pazion.nl:
>> _opt_hosted-engine/88b69eba-ef4f-4dbe-ba53-20dadd424d0e/images/6e14348b-af7a-49bc-9af2-8b703c17a53d/fabdd6f4-b8d6-4ffe-889c-df86b34619ca
>>
>> MainThread::INFO::2016-03-24
>> 20:12:29,861::config::225::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
>> Found an OVF for HE VM, trying to convert
>>
>> MainThread::INFO::2016-03-24
>> 20:12:29,865::config::230::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
>> Got vm.conf from OVF_STORE
>>
>> MainThread::INFO::2016-03-24
>> 20:12:29,997::hosted_engine::462::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>> Current state EngineUp (score: 3400)
>>
>>
>> So this leaves me wondering whether I should worry about the errors in the
>> event log.
>>
>>
>>
>> On Thu, 24 Mar 2016 at 16:18, Paul Groeneweg | Pazion <
>> p...@pazion.nl> wrote:
>>
>>>
>>> These OVF stores are created on my hosted-engine storage instance. I did
>>> not find any reference to them in hosted-engine.conf, so are you sure they
>>> can't be deleted?
>>>
>>> So it holds only info about the hosted-engine disk? When detaching, do I
>>> risk destroying my hosted engine?
>>>
>>> I can just detach them in this screen:
>>> http://screencast.com/t/ymnzsNHj7e and then re-attach?
>>>
>>> I checked the file permissions, and they looked good compared to the other
>>> images. So this event log entry is really strange.
>>>
>>> Regards,
>>> Paul
>>>
>>>
>>> On Thu, 24 Mar 2016 at 10:01, Maor Lipchuk <mlipc...@redhat.com> wrote:
>>>
>>>> Kind regards,
>>>>
>>>> Paul Groeneweg
>>>> Pazion
>>>> Webdevelopment  -  Hosting  -  Apps
>>>>
>>>> T +31 26 3020038
>>>> M +31 614 277 577
>>>> E  p...@pazion.nl
>>>>
>>>>  ***disclaimer***
>>>> "This e-mail and any attachments thereto may contain in

Re: [ovirt-users] delete storage definition

2016-03-24 Thread Maor Lipchuk
On Thu, Mar 24, 2016 at 12:20 PM, p...@email.cz  wrote:

> Hi folks,
> How can I delete the last storage definition from the oVirt database if the
> last volume has already been deleted directly from the bricks on the command
> line (rm -rf <path to that volume>)?
> The last record for this storage still exists in the oVirt DB and is blocking
> the creation of a new storage domain (oVirt offers "delete data center", but
> that is not the right way for me right now).
> regs. Pavel
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
Hi Pavel,

What are your plans for that Data Center?
If you want to keep that Data Center in use with other storage domains, you can
add a new storage domain without attaching it to any Data Center and then
re-initialize the Data Center with this new storage domain. Once the Data Center
is re-initialized, you can remove the old Storage Domain (or force-remove it if
you encounter any problem).
Please let me know if this helps, or whether there is anything else you were
trying to do.

Regards,
Maor
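
If you prefer to drive this from a script rather than the UI, a rough sketch
with the oVirt Python SDK (ovirtsdk4, for 4.x engines; the SDK for 3.6 differs)
could look like the following. The domain name, UUIDs and NFS export below are
placeholders, not something taken from this thread:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

conn = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
sds = conn.system_service().storage_domains_service()

# 1. Add a new data domain without attaching it to any data center.
new_sd = sds.add(types.StorageDomain(
    name='data_new',
    type=types.StorageDomainType.DATA,
    host=types.Host(name='host1'),
    storage=types.HostStorage(
        type=types.StorageType.NFS,
        address='nfs.example.com',
        path='/exports/data_new',
    ),
))

# 2. Attach it to the data center; this is roughly what the UI
#    "Re-Initialize Data Center" flow does when the DC has no usable master.
dc = conn.system_service().data_centers_service().data_center_service('DC-UUID')
dc.storage_domains_service().add(types.StorageDomain(id=new_sd.id))

# 3. Destroy the stale domain record. destroy=True only removes it from the
#    engine database (the "force remove" path), which fits a domain whose
#    backing volume was already deleted by hand.
sds.storage_domain_service('OLD-SD-UUID').remove(destroy=True)

conn.close()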
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Delete Failed to update OVF disks, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hostedengine_nfs).

2016-03-24 Thread Maor Lipchuk
On Thu, Mar 24, 2016 at 12:12 AM, Paul Groeneweg | Pazion 
wrote:

>
> After the 3.6 updates (which didn't go without a hitch)
>
> I get the following errors in my event log:
>
> Failed to update OVF disks 18c50ea6-4654-4525-b241-09e15acf5e99, OVF data
> isn't updated on those OVF stores (Data Center Default, Storage Domain
> hostedengine_nfs).
>
> VDSM command failed: Could not acquire resource. Probably resource factory
> threw an exception.: ()
>
> http://screencast.com/t/S8cfXMsdGM
>
> When I check the file there is some data, but it is not updated:
> http://screencast.com/t/hbXQFlou
>
> When I check in the web interface I see 2 OVF files listed. What are these
> for? Can I delete them? http://screencast.com/t/ymnzsNHj7e
>

> Hopefully someone knows what to do about these warnings/errors and whether
> I can delete the OVF files.
>

> Best Regards,
> Paul Groeneweg
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
Hi Paul,

The OVF_STORE disks preserve the OVF data of all the VMs and Templates and are
mostly used for disaster recovery scenarios.
Those disks cannot be deleted.
Regarding the audit log entry you got, can you try to detach and re-attach the
storage domain and let me know whether you still get this event log.

Regards,
Maor
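
For completeness, the detach/re-attach can also be driven through the Python
SDK (ovirtsdk4, 4.x). This is only a sketch with placeholder UUIDs, and note
that the hosted-engine storage domain itself generally cannot be detached while
the engine VM is running from it, so treat it as the generic flow for a regular
data domain:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

conn = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

dc = conn.system_service().data_centers_service().data_center_service('DC-UUID')
attached = dc.storage_domains_service().storage_domain_service('SD-UUID')

attached.deactivate()   # move the attached domain into maintenance
# (in a real script, poll the domain status until it reports maintenance)
attached.remove()       # detach it from the data center

dc.storage_domains_service().add(types.StorageDomain(id='SD-UUID'))  # re-attach
attached.activate()     # and activate it again

conn.close()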
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] User with SuperAdmin Role has not MANIPULATE_STORAGE_DOMAIN

2016-01-12 Thread Maor Lipchuk




- Original Message -
> From: "Kevin C" <ke...@famillecousin.fr>
> To: "Maor Lipchuk" <mlipc...@redhat.com>
> Cc: "users" <users@ovirt.org>, "Oved Ourfali" <oourf...@redhat.com>
> Sent: Monday, January 11, 2016 11:04:11 AM
> Subject: Re: [ovirt-users] User with SuperAdmin Role has not 
> MANIPULATE_STORAGE_DOMAIN
> 
> 
> 
> On 09/01/2016 16:09, Maor Lipchuk wrote:
> > Hi Kevin,
> >
> > Does it still reproduce after the permissions were set?
> >
> > Regards,
> > Maor
> >
> Hi Maor,
> 
> Yes it does, I just try it with another Domain.
> 
> Regards


Which role have you added to your user? Can you please edit the role you have 
added and check whether the "Configure Storage Domain" action group is marked 
(see attached screenshot)?
Can you also please try to add the StorageAdmin role to the user (see second 
attached screenshot).

Regards,
Maor
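
The same assignment can also be scripted with the Python SDK (ovirtsdk4, 4.x).
This is only a sketch, assuming you want the StorageAdmin role on one specific
storage domain; the UUIDs are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

conn = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

sd_service = conn.system_service().storage_domains_service() \
                 .storage_domain_service('SD-UUID')

# Grant the directory user the StorageAdmin role on this storage domain only.
sd_service.permissions_service().add(
    types.Permission(
        role=types.Role(name='StorageAdmin'),
        user=types.User(id='USER-UUID'),
    ),
)

conn.close()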

> 
> ---
> 
> Kevin C
> 
> 
> > - Original Message -
> >> From: "Oved Ourfali" <oourf...@redhat.com>
> >> To: "Kevin C" <ke...@famillecousin.fr>
> >> Cc: "users" <users@ovirt.org>
> >> Sent: Friday, January 8, 2016 1:20:53 PM
> >> Subject: Re: [ovirt-users] User with SuperAdmin Role has not
> >>MANIPULATE_STORAGE_DOMAIN
> >>
> >>
> >>
> >> CC-ing someone from the storage team to take a look.
> >> On Jan 7, 2016 6:43 PM, "Kevin C" < ke...@famillecousin.fr > wrote:
> >>
> >>
> >>
> >> Hi,
> >>
> >> I set it on "system" level, on right upper side.
> >>
> >> Regards,
> >>
> >> On 07/01/2016 17:39, Oved Ourfali wrote:
> >>
> >>
> >>
> >>
> >> Permissions in ovirt are composed of the role, user/group, and object.
> >>
> >> I guess you refer to the SuperUser role. Question is what object you've
> >> granted it on.
> >>
> >> In order to have a permission on "system" level, you have to go to the
> >> configure dialog (see right upper side of your screen).
> >>
> >> Regards,
> >> Oved Ourfali
> >> Hi list,
> >>
> >> I set the SuperAdmin Role on an AD group. I use my account in this group to
> >> use oVirt. I try today to add an Export Domain but I failed with this
> >> error
> >> in log :
> >>
> >> 2016-01-07 16:46:28,883 INFO
> >> [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
> >> (default task-1) [68d5410a] No permission found for user
> >> '8ac67747-110c-4125-86f1-1f52ca0e7705' or one of the groups he is member
> >> of,
> >> when running action 'AttachStorageDomainToPool', Required permissions are:
> >> Action type: 'ADMIN' Action group: 'MANIPULATE_STORAGE_DOMAIN' Object
> >> type:
> >> 'Storage' Object ID: 'c7dee64d-a27e-446e-8656-cef2d8ea42a6'.
> >>
> >>
> >> Where can I set the good permission ?
> >>
> >> Thanks a lot
> >> ---
> >> Kevin C
> >> ___
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >>
> >>
> >>
> >>
> >> ___
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >>
> 
>___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] User with SuperAdmin Role has not MANIPULATE_STORAGE_DOMAIN

2016-01-12 Thread Maor Lipchuk

- Original Message -
> From: "Kevin COUSIN" <ke...@famillecousin.fr>
> To: "Maor Lipchuk" <mlipc...@redhat.com>
> Cc: "users" <users@ovirt.org>, "Oved Ourfali" <oourf...@redhat.com>
> Sent: Tuesday, January 12, 2016 5:06:22 PM
> Subject: Re: [ovirt-users] User with SuperAdmin Role has not 
> MANIPULATE_STORAGE_DOMAIN
> 
> I set the SuperAdmin Role on a group.
> It doesn't work with the StorageAdmin role either.
> I can't assign roles with my directory account; I need to use the admin@internal
> account.


To which DC are you trying to attach the Storage Domain?
From the attached screenshots it looks like the DCs you have permissions on 
are infra and local.
Also, which oVirt version are you using?
If possible, can you please send screenshots showing the permissions of the 
user and the permissions on the Data Center?

Thanks,
Maor


> 
> 
> 
>COUSIN Kevin
> 
> - Original Message -
> > From: "Maor Lipchuk" <mlipc...@redhat.com>
> > To: "Kevin C" <ke...@famillecousin.fr>
> > Cc: "users" <users@ovirt.org>, "Oved Ourfali" <oourf...@redhat.com>
> > Sent: Tuesday, January 12, 2016 13:57:16
> > Subject: Re: [ovirt-users] User with SuperAdmin Role has not
> > MANIPULATE_STORAGE_DOMAIN
> 
> > - Original Message -
> >> From: "Kevin C" <ke...@famillecousin.fr>
> >> To: "Maor Lipchuk" <mlipc...@redhat.com>
> >> Cc: "users" <users@ovirt.org>, "Oved Ourfali" <oourf...@redhat.com>
> >> Sent: Monday, January 11, 2016 11:04:11 AM
> >> Subject: Re: [ovirt-users] User with SuperAdmin Role has not
> >> MANIPULATE_STORAGE_DOMAIN
> >> 
> >> 
> >> 
> >> Le 09/01/2016 16:09, Maor Lipchuk a écrit :
> >> > Hi Kevin,
> >> >
> >> > Does it still reproduce after the permissions were set?
> >> >
> >> > Regards,
> >> > Maor
> >> >
> >> Hi Maor,
> >> 
> >> Yes it does, I just try it with another Domain.
> >> 
> >> Regards
> > 
> > 
> > Which role have you added to your user? Can u please try to edit the role
> > which
> > you have added to your user, does the role "Configure Storage Domain" is
> > marked
> > (See attached screenshot).
> > Can you please try to add to the user the role StorageAdmin (See second
> > attached
> > screenshot)
> > 
> > Regards,
> > Maor
> > 
> >> 
> >> ---
> >> 
> >> Kevin C
> >> 
> >> 
> >> > - Original Message -
> >> >> From: "Oved Ourfali" <oourf...@redhat.com>
> >> >> To: "Kevin C" <ke...@famillecousin.fr>
> >> >> Cc: "users" <users@ovirt.org>
> >> >> Sent: Friday, January 8, 2016 1:20:53 PM
> >> >> Subject: Re: [ovirt-users] User with SuperAdmin Role has not
> >> >> MANIPULATE_STORAGE_DOMAIN
> >> >>
> >> >>
> >> >>
> >> >> CC-ing someone from the storage team to take a look.
> >> >> On Jan 7, 2016 6:43 PM, "Kevin C" < ke...@famillecousin.fr > wrote:
> >> >>
> >> >>
> >> >>
> >> >> Hi,
> >> >>
> >> >> I set it on "system" level, on right upper side.
> >> >>
> >> >> Regards,
> >> >>
> >> >> Le 07/01/2016 17:39, Oved Ourfali a écrit :
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> Permissions in ovirt are composed of the role, user/group, and object.
> >> >>
> >> >> I guess you refer to the SuperUser role. Question is what object you've
> >> >> granted it on.
> >> >>
> >> >> In order to have a permission on "system" level, you gave to go to the
> >> >> configure dialog (see right upper side of your screen).
> >> >>
> >> >> Regards,
> >> >> Oved Ourfali
> >> >> Hi list,
> >> >>
> >> >> I set the SuperAdmin Role on a AD group. I use my account in this group
> >> >> to
> >> >> use oVirt. I try today to add an Export Domain but I failed with this
> >> >> error
> >> >> in log :
> >> >>
> >> >> 2016-01-07 16:46:28,883 INFO
> >> >> [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
> >> >> (default task-1) [68d5410a] No permission found for user
> >> >> '8ac67747-110c-4125-86f1-1f52ca0e7705' or one of the groups he is
> >> >> member
> >> >> of,
> >> >> when running action 'AttachStorageDomainToPool', Required permissions
> >> >> are:
> >> >> Action type: 'ADMIN' Action group: 'MANIPULATE_STORAGE_DOMAIN' Object
> >> >> type:
> >> >> 'Storage' Object ID: 'c7dee64d-a27e-446e-8656-cef2d8ea42a6'.
> >> >>
> >> >>
> >> >> Where can I set the good permission ?
> >> >>
> >> >> Thanks a lot
> >> >> ---
> >> >> Kevin C
> >> >> ___
> >> >> Users mailing list
> >> >> Users@ovirt.org
> >> >> http://lists.ovirt.org/mailman/listinfo/users
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> ___
> >> >> Users mailing list
> >> >> Users@ovirt.org
> >> >> http://lists.ovirt.org/mailman/listinfo/users
> >> >>
> >> 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ISO domain question

2016-01-10 Thread Maor Lipchuk



- Original Message -
> From: "Will Dennis" <wden...@nec-labs.com>
> To: "Yedidyah Bar David" <d...@redhat.com>
> Cc: "Maor Lipchuk" <mlipc...@redhat.com>, "users" <users@ovirt.org>
> Sent: Monday, January 11, 2016 2:48:35 AM
> Subject: Re: [ovirt-users] ISO domain question
> 
> Looks like a pretty old request, too… if I may ask, is this a complicated
> technical change, or is it just lower priority than other open issues? Seems
> like if it was an easy change to make, it would be a big win for users…

Indeed, it is on the roadmap, along with converting the Export domain into a 
regular data storage domain.

> 
> On Jan 10, 2016, at 3:15 AM, Yedidyah Bar David
> <d...@redhat.com> wrote:
> 
> On Sun, Jan 10, 2016 at 5:09 AM, Will Dennis
> <wden...@nec-labs.com> wrote:
> 
> Seems like there should be an easier way to consume an
> external existing repository.
> 
> Indeed, and we have an RFE for that:
> https://bugzilla.redhat.com/show_bug.cgi?id=1034112
> 
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


  1   2   3   >