Re: [ovirt-users] Ovirt + Citrix Xenserver support?

2017-05-09 Thread santosh bhabal
Thanks for the update.

On Tue, May 9, 2017 at 4:11 PM, Nathanaël Blanchet <blanc...@abes.fr> wrote:

>
>
> On 09/05/2017 at 12:29, santosh bhabal wrote:
>
> Hello Experts,
>
> I am new to Ovirt community.
> Apologies if this question has been asked earlier.
> I just wanted to know: does oVirt support Citrix XenServer or not?
>
> Definitely not; oVirt is only a KVM-based hypervisor manager, even though
> it is built on the libvirt API, as Xen is.
> But importing Xen VMs into oVirt is child's play.
>
> ... otherwise you can play with nested virtualization to run your Xen
> nodes as oVirt VMs :)
>
>
> Regards
> Santosh.
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
> --
> Nathanaël Blanchet
>
> Network Supervision
> IT Infrastructure Division
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt + Citrix Xenserver support?

2017-05-09 Thread santosh bhabal
Hello Experts,

I am new to Ovirt community.
Apologies if this question has been asked earlier.
I just wanted to know: does oVirt support Citrix XenServer or not?

Regards
Santosh.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Restoring backed up VM

2014-10-07 Thread santosh

Allon, Elad:

Yes, RHEL_65_CL1_CLONE2 has the same ID as the VM being restored.
And the error message also indicates that the conflict is occurring on the 
VM ID.


Thanks for the confirmation.

Thanks,
Santosh

On 10/07/2014 03:59 AM, Maor Lipchuk wrote:

Elad, the create VM from configuration checks the existence of the VM ID.
Probably RHEL_65_CL1_CLONE2 has the same id as the VM which is being imported.

Regards,
Maor
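
For illustration, a minimal Python sketch of that check, before attempting the import-from-configuration call: it asks the engine whether a VM with the backed-up VM's ID still exists. The engine URL, credentials and the use of the python requests library are assumptions; the VM ID is just the one quoted elsewhere in this digest.

# Hedged sketch: check for an ID collision before restoring from a saved
# configuration. Host name, credentials and the VM ID are placeholders.
import requests

ENGINE = "https://engine.example.com/api"   # assumed oVirt 3.x REST entry point
AUTH = ("admin@internal", "password")       # placeholder credentials

def vm_id_in_use(vm_id):
    """Return True if the engine already knows a VM with this ID."""
    resp = requests.get("%s/vms/%s" % (ENGINE, vm_id),
                        auth=AUTH, verify=False)
    return resp.status_code == 200          # 404 would mean the ID is free

if __name__ == "__main__":
    saved_vm_id = "4dcd5b6a-cf4b-460c-899d-4edb5345d705"  # from the backed-up config
    if vm_id_in_use(saved_vm_id):
        print("A VM with this ID still exists; remove it first or restore as a clone.")
    else:
        print("ID is free; the create-from-configuration call should not conflict.")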


- Original Message -

From: Elad Ben Aharon ebena...@redhat.com
To: santosh sba...@commvault.com, Allon Mureinik amure...@redhat.com
Cc: users@ovirt.org
Sent: Tuesday, October 7, 2014 9:08:42 AM
Subject: Re: [ovirt-users] Restoring backed up VM

If you still have the backed up VM in your system, the create from
configuration operation should be blocked as you've encountered.
Allon, what does the create VM from configuration check? The existence of
the VM name or the existence of its ID?

- Original Message -
From: santosh sba...@commvault.com
To: users@ovirt.org
Sent: Monday, 6 October, 2014 4:28:06 AM
Subject: [ovirt-users] Restoring backed up VM

Hi

I am trying to back up and restore the VM using the flow suggested at
Features/Backup-Restore API Integration

When I tried to restore the VM using the following steps,


Full Virtual Machine Restoration


 1. Create disks for restore
 2. Attach the disks for restore to the virtual appliance (restore the
 data to them)
 3. Detach the disks from the virtual appliance.
 4. Create a VM using the configuration that was saved as part of the
 backup flow - (added capability to oVirt as part of the Backup API)
 5. Attach the restored disks to the created VM.

I encountered the following error at step 4.



<fault>
    <reason>Operation Failed</reason>
    <detail>[Import VM failed - VM Id already exist in the system. Please remove
    the VM (RHEL_65_CL1_CLONE2) from the system first]</detail>
</fault>

I have not deleted/removed the backed up VM from the system.

Is it expected behaviour?
Should it not overwrite the existing VM or create a new VM with a different VM
ID, if the backed up VM exists?


Thanks, Santosh

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Restoring backed up VM

2014-10-05 Thread santosh

Hi

I am trying to back up and restore the VM using the flow suggested at
Features/Backup-Restore API Integration 
http://www.ovirt.org/Features/Backup-Restore_API_Integration#VM_Backup.2FRestore_suggested_flows


When I tried to restore the VM using the following steps,


Full Virtual Machine Restoration

1. Create disks for restore
2. Attach the disks for restore to the virtual appliance (restore the
   data to them)
3. Detach the disks from the virtual appliance.
4. Create a VM using the configuration that was saved as part of the
   backup flow - (added capability to oVirt as part of the Backup API)
5. Attach the restored disks to the created VM.
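
For step 4, a minimal Python sketch of what the create-from-configuration request could look like. The endpoint and the initialization/configuration elements follow the Backup-Restore API wiki page referenced above; the engine URL, credentials, cluster name and OVF payload are placeholders and should be treated as assumptions.

# Hedged sketch of step 4: create a VM from the configuration saved during
# backup. Engine URL, credentials, cluster name and the OVF data are
# placeholders.
import requests

ENGINE = "https://engine.example.com/api"
AUTH = ("admin@internal", "password")

def create_vm_from_configuration(name, cluster, ovf_data):
    body = (
        "<vm>"
        "<name>%s</name>"
        "<cluster><name>%s</name></cluster>"
        "<initialization><configuration>"
        "<type>ovf</type>"
        "<data><![CDATA[%s]]></data>"
        "</configuration></initialization>"
        "</vm>" % (name, cluster, ovf_data)
    )
    return requests.post("%s/vms" % ENGINE, data=body, auth=AUTH,
                         headers={"Content-Type": "application/xml"},
                         verify=False)

# ovf_data would be the initialization/configuration/data string captured
# during the backup flow (see the GET snapshot step in the backup thread below).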

I encountered the following error at step 4.

   <fault>
       <reason>Operation Failed</reason>
       <detail>[Import VM failed - VM Id already exist in the system. Please
       remove the VM (RHEL_65_CL1_CLONE2) from the system first]</detail>
   </fault>


I have not deleted/removed the backed up VM from the system.

Is it expected behaviour?
Should it not overwrite the existing VM or create a new VM with a different
VM ID, if the backed up VM exists?



Thanks, Santosh



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Difference between Disk ID of VM and Disk ID of snapshot

2014-09-04 Thread santosh

Hi All,

Will the disk IDs provided by the following two APIs be the same or different?

   SERVER:PORT/api/vms/VM_ID/snapshots/ID/disks

   SERVER:PORT/api/vms/VM_ID/disks
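
For illustration, a minimal Python sketch that fetches both listings and compares the reported disk IDs, without assuming what the answer is. The engine URL, credentials and the VM/snapshot IDs are placeholders (the IDs are reused from another thread in this digest); the two endpoints are the ones quoted above.

# Hedged sketch: compare the disk IDs returned by the two endpoints.
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/api"
AUTH = ("admin@internal", "password")

def disk_ids(path):
    resp = requests.get(ENGINE + path, auth=AUTH, verify=False)
    resp.raise_for_status()
    return {disk.get("id") for disk in ET.fromstring(resp.content).findall("disk")}

vm_id = "4dcd5b6a-cf4b-460c-899d-4edb5345d705"      # placeholder VM
snap_id = "a8f63b31-1bfa-45ba-a8cc-d10b486f1094"    # placeholder snapshot

vm_disks = disk_ids("/vms/%s/disks" % vm_id)
snap_disks = disk_ids("/vms/%s/snapshots/%s/disks" % (vm_id, snap_id))
print("IDs only on the VM:      ", vm_disks - snap_disks)
print("IDs only on the snapshot:", snap_disks - vm_disks)
print("IDs common to both:      ", vm_disks & snap_disks)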

Thanks, Santosh




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Question on Backup and Restore API

2014-08-19 Thread santosh


On 08/19/2014 10:24 AM, Liron Aravot wrote:


- Original Message -

From: santosh sba...@commvault.com
To: users@ovirt.org
Sent: Monday, August 18, 2014 7:34:02 PM
Subject: [ovirt-users] Question on Backup and Restore API

Hi,

The Backup and Restore API page has the steps for Full VM Backups. The
steps are:



 1. Take a snapshot of the virtual machine to be backed up - (existing
 oVirt REST API operation)
 2. Back up the virtual machine configuration at the time of the snapshot
 (the disk configuration can be backed up as well if needed) - (added
 capability to oVirt as part of the Backup API)
 3. Attach the disk snapshots that were created in (1) to the virtual
 appliance for data backup - (added capability to oVirt as part of the
 Backup API)
 4. Data can be backed up
 5. Detach the disk snapshots that were attached in (4) from the virtual
 appliance - (added capability to oVirt as part of the Backup API)

In the example section, following is the explanation for step 2.



Grab the wanted VM configuration from the needed snapshot - it will be under
initialization/configuration/data
URL = SERVER:PORT/api/vms/VM_ID/snapshots/ID
   Method = GET

Hi santosh,
you should also send the All-Content: true header.
I updated the wiki page accordingly.
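
For illustration, a minimal Python sketch of the GET Liron describes: request the snapshot with the All-Content: true header and pull out initialization/configuration/data. The engine URL, credentials and the python requests dependency are assumptions; the IDs match the output quoted below.

# Hedged sketch: GET the snapshot with All-Content: true and save the OVF.
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/api"
AUTH = ("admin@internal", "password")
vm_id = "4dcd5b6a-cf4b-460c-899d-4edb5345d705"
snap_id = "a8f63b31-1bfa-45ba-a8cc-d10b486f1094"

resp = requests.get("%s/vms/%s/snapshots/%s" % (ENGINE, vm_id, snap_id),
                    headers={"All-Content": "true"},   # the header noted above
                    auth=AUTH, verify=False)
resp.raise_for_status()

snapshot = ET.fromstring(resp.content)
data = snapshot.findtext(".//initialization/configuration/data")
if data:
    open("vm_config.ovf", "w").write(data)             # save the OVF for restore
else:
    print("No initialization/configuration/data in the response.")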

Thanks Liron. That gives the information I was looking for.

Thanks, Santosh.


thanks!

But when I run the GET request using the REST API, I am not finding the
initialization/configuration/data information in the output.
Following is the output of the GET request. Please advise if I am missing
something or looking in the wrong place.
I am also attaching an XML file with the following content. Please let me know
if you need more information.






<snapshot href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094" id="a8f63b31-1bfa-45ba-a8cc-d10b486f1094">
    <actions>
        <link href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094/restore" rel="restore"/>
    </actions>
    <description>FirstClick</description>
    <type>regular</type>
    <vm id="4dcd5b6a-cf4b-460c-899d-4edb5345d705">
        <name>RHEL_65_CL1</name>
        <description>This is cluster 1</description>
        <link href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094/cdroms" rel="cdroms"/>
        <link href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094/disks" rel="disks"/>
        <link href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094/nics" rel="nics"/>
        <type>server</type>
        <status>
            <state>up</state>
        </status>
        <memory>1073741824</memory>
        <cpu>
            <topology sockets="1" cores="1"/>
            <architecture>X86_64</architecture>
        </cpu>
        <cpu_shares>0</cpu_shares>
        <os type="other">
            <boot dev="cdrom"/>
            <boot dev="hd"/>
        </os>
        <high_availability>
            <enabled>false</enabled>
            <priority>1</priority>
        </high_availability>
        <display>
            <type>vnc</type>
            <address>172.19.110.43</address>
            <port>5900</port>
            <monitors>1</monitors>
            <single_qxl_pci>false</single_qxl_pci>
            <allow_override>false</allow_override>
            <smartcard_enabled>false</smartcard_enabled>
        </display>
        <host id="2704a037-0a61-4f3e-8063-6bd67bdbac36"/>
        <cluster id="ba23117a-708e-40f6-bf32-970c4f86b7ee"/>
        <template id="----"/>
        <start_time>2014-08-18T11:16:47.307-04:00</start_time>
        <stop_time>2014-08-18T11:14:26.323-04:00</stop_time>
        <creation_time>2014-08-08T17:39:52.000-04:00</creation_time>
        <origin>ovirt</origin>
        <stateless>false</stateless>
        <delete_protected>false</delete_protected>
        <sso>
            <methods>
                <method id="GUEST_AGENT"/>
            </methods>
        </sso>
        <initialization/>
        <placement_policy>
            <affinity>migratable</affinity>
        </placement_policy>
        <memory_policy>
            <guaranteed>1073741824</guaranteed>
        </memory_policy>
        <usb>
            <enabled>false</enabled>
        </usb>
        <migration_downtime>-1</migration_downtime>
    </vm>
    <date>2014-08-17T17:03:53.461-04:00</date>
    <snapshot_status>ok</snapshot_status>
    <persist_memorystate>false</persist_memorystate>
</snapshot>

Thanks,
Santosh

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users







[ovirt-users] Question on Backup and Restore API

2014-08-18 Thread santosh

Hi,

The Backup and Restore API page
http://www.ovirt.org/Features/Backup-Restore_API_Integration has the steps
for Full VM Backups. The steps are:


1. Take a snapshot of the virtual machine to be backed up - (existing
   oVirt REST API operation)
2. Back up the virtual machine configuration at the time of the
   snapshot (the disk configuration can be backed up as well if needed)
   - (added capability to oVirt as part of the Backup API)
3. Attach the disk snapshots that were created in (1) to the virtual
   appliance for data backup - (added capability to oVirt as part of
   the Backup API)
4. Data can be backed up
5. Detach the disk snapshots that were attached in (4) from the virtual
   appliance - (added capability to oVirt as part of the Backup API)
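
For step 1, a minimal Python sketch of taking the snapshot over REST. The engine URL, credentials and VM ID are placeholders, and the python requests library is an assumption; the endpoint is the standard oVirt 3.x snapshots collection under the VM resource.

# Hedged sketch of step 1: take a snapshot of the VM to be backed up.
import requests

ENGINE = "https://engine.example.com/api"
AUTH = ("admin@internal", "password")
vm_id = "4dcd5b6a-cf4b-460c-899d-4edb5345d705"

body = "<snapshot><description>backup snapshot</description></snapshot>"
resp = requests.post("%s/vms/%s/snapshots" % (ENGINE, vm_id),
                     data=body, auth=AUTH, verify=False,
                     headers={"Content-Type": "application/xml"})
print(resp.status_code, resp.reason)   # 201/202 indicates the snapshot was accepted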


In the example section, following is the explanation for step 2.

   Grab the wanted VM configuration from the needed snapshot - it will be
   under initialization/configuration/data

  URL = SERVER:PORT/api/vms/VM_ID/snapshots/ID
  Method = GET


But when I run the GET request using the REST API, I am not finding
the initialization/configuration/data information in the output.
Following is the output of the GET request. Please advise if I am
missing something or looking in the wrong place.
I am also attaching an XML file with the following content. Please let me
know if you need more information.


   
<snapshot href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094" id="a8f63b31-1bfa-45ba-a8cc-d10b486f1094">
    <actions>
        <link href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094/restore" rel="restore"/>
    </actions>
    <description>FirstClick</description>
    <type>regular</type>
    <vm id="4dcd5b6a-cf4b-460c-899d-4edb5345d705">
        <name>RHEL_65_CL1</name>
        <description>This is cluster 1</description>
        <link href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094/cdroms" rel="cdroms"/>
        <link href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094/disks" rel="disks"/>
        <link href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094/nics" rel="nics"/>
        <type>server</type>
        <status>
            <state>up</state>
        </status>
        <memory>1073741824</memory>
        <cpu>
            <topology sockets="1" cores="1"/>
            <architecture>X86_64</architecture>
        </cpu>
        <cpu_shares>0</cpu_shares>
        <os type="other">
            <boot dev="cdrom"/>
            <boot dev="hd"/>
        </os>
        <high_availability>
            <enabled>false</enabled>
            <priority>1</priority>
        </high_availability>
        <display>
            <type>vnc</type>
            <address>172.19.110.43</address>
            <port>5900</port>
            <monitors>1</monitors>
            <single_qxl_pci>false</single_qxl_pci>
            <allow_override>false</allow_override>
            <smartcard_enabled>false</smartcard_enabled>
        </display>
        <host id="2704a037-0a61-4f3e-8063-6bd67bdbac36"/>
        <cluster id="ba23117a-708e-40f6-bf32-970c4f86b7ee"/>
        <template id="----"/>
        <start_time>2014-08-18T11:16:47.307-04:00</start_time>
        <stop_time>2014-08-18T11:14:26.323-04:00</stop_time>
        <creation_time>2014-08-08T17:39:52.000-04:00</creation_time>
        <origin>ovirt</origin>
        <stateless>false</stateless>
        <delete_protected>false</delete_protected>
        <sso>
            <methods>
                <method id="GUEST_AGENT"/>
            </methods>
        </sso>
        <initialization/>
        <placement_policy>
            <affinity>migratable</affinity>
        </placement_policy>
        <memory_policy>
            <guaranteed>1073741824</guaranteed>
        </memory_policy>
        <usb>
            <enabled>false</enabled>
        </usb>
        <migration_downtime>-1</migration_downtime>
    </vm>
    <date>2014-08-17T17:03:53.461-04:00</date>
    <snapshot_status>ok</snapshot_status>
    <persist_memorystate>false</persist_memorystate>
</snapshot>

Thanks,
Santosh




Re: [ovirt-users] Detecting already existing VM on the attached LUN.

2014-08-14 Thread santosh

Thanks Maor.

I am currently using 3.4.

The link provides exactly what I am looking for.

Thanks, Santosh.


On 08/14/2014 05:15 AM, Maor Lipchuk wrote:

Hi Santosh,

Which oVirt version are you using?
If you were using oVirt 3.5, you could use the Import Storage Domain
feature to do that (see [1]).

[1] 
http://www.ovirt.org/Features/ImportStorageDomain#Work_flow_for_Import_block_Storage_Domain_-_UI_flow

Regards,
Maor
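
For illustration, a minimal Python sketch of the REST side of the flow in [1]: once the existing iSCSI domain has been imported and attached to the data center, list the VMs that are still unregistered on it and register one into a cluster. The vms;unregistered collection and the register action are taken from the 3.5 Import Storage Domain feature; the engine URL, credentials, IDs and cluster name are placeholders, so treat the details as assumptions.

# Hedged sketch: list unregistered VMs on an imported storage domain and
# register one of them into a cluster.
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/api"
AUTH = ("admin@internal", "password")
sd_id = "PUT-THE-IMPORTED-STORAGE-DOMAIN-ID-HERE"

resp = requests.get("%s/storagedomains/%s/vms;unregistered" % (ENGINE, sd_id),
                    auth=AUTH, verify=False)
resp.raise_for_status()

for vm in ET.fromstring(resp.content).findall("vm"):
    print("unregistered VM:", vm.get("id"), vm.findtext("name"))

# Register one of them into a target cluster:
vm_id = "PUT-AN-UNREGISTERED-VM-ID-HERE"
body = "<action><cluster><name>CL1</name></cluster></action>"
requests.post("%s/storagedomains/%s/vms/%s/register" % (ENGINE, sd_id, vm_id),
              data=body, auth=AUTH, verify=False,
              headers={"Content-Type": "application/xml"})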

- Original Message -
From: santosh sba...@commvault.com
To: users@ovirt.org
Sent: Wednesday, August 13, 2014 11:46:16 PM
Subject: [ovirt-users] Detecting already existing VM on the attached LUN.

Hi,

I had a LUN (say L1) from a NetApp storage array attached to a RHEV iSCSI
storage domain. I had a couple of VMs on this storage domain.
I destroyed this storage domain when the LUN became inaccessible for
some reason. Then I created a new storage domain with a different LUN (say L2)
and created a couple more VMs on it. Now the first LUN (L1) is available.

In this scenario, I have the following two questions:

1) Can I attach L1 directly?
2) If I can, will I be able to access the VMs present on L1?

Thanks, Santosh.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Detecting already existing VM on the attached LUN.

2014-08-13 Thread santosh

Hi,

I had a LUN (say L1) from a NetApp storage array attached to a RHEV iSCSI
storage domain. I had a couple of VMs on this storage domain.
I destroyed this storage domain when the LUN became inaccessible for
some reason. Then I created a new storage domain with a different LUN (say L2)
and created a couple more VMs on it. Now the first LUN (L1) is available.

In this scenario, I have the following two questions:

1) Can I attach L1 directly?
2) If I can, will I be able to access the VMs present on L1?

Thanks, Santosh.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sharing iSCSI data storage domain across multiple clusters in the same datacenter

2014-08-05 Thread santosh


On 08/04/2014 04:42 PM, Itamar Heim wrote:

On 08/04/2014 05:46 PM, santosh wrote:

On 08/03/2014 03:01 PM, Itamar Heim wrote:

On 07/30/2014 10:35 PM, santosh wrote:

Hi,

Can we share the iSCSI data storage domain across multiple clusters in
the same datacenter?

Following are the setup details which I tried.

  - One datacenter, say DC1
  - In DC1, two clusters, say CL1 and CL2
  - In CL1, one host, say H1; and in CL2, one host, say H2
  - An iSCSI data storage domain is configured where the external storage
    LUNs are exported to host H1 (a host in CL1 of the datacenter).


While adding H1 to CL1 succeeded, the addition of H2 to CL2 is failing
with the following error in vdsm.log.

  Traceback (most recent call last):
 File /usr/share/vdsm/storage/task.py, line 873, in _run
   return fn(*args, **kargs)
 File /usr/share/vdsm/logUtils.py, line 45, in wrapper
   res = f(*args, **kwargs)
 File /usr/share/vdsm/storage/hsm.py, line 1020, in
  connectStoragePool
   spUUID, hostID, msdUUID, masterVersion, domainsMap)
 File /usr/share/vdsm/storage/hsm.py, line 1091, in
  _connectStoragePool
   res = pool.connect(hostID, msdUUID, masterVersion)
 File /usr/share/vdsm/storage/sp.py, line 630, in connect
   self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
 File /usr/share/vdsm/storage/sp.py, line 1153, in __rebuild
   self.setMasterDomain(msdUUID, masterVersion)
 File /usr/share/vdsm/storage/sp.py, line 1360, in setMasterDomain
   raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
  StoragePoolMasterNotFound: Cannot find master domain:
  'spUUID=a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2,
  msdUUID=741f7913-09ad-4d96-a225-3bda6d06e042'
  Thread-13::DEBUG::2014-07-30
  15:24:49,780::task::885::TaskManager.Task::(_run)
  Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Task._run:
  07997682-8d6b-42fd-acb3-1360f14860d6
  ('a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2', 2,
  '741f7913-09ad-4d96-a225-3bda6d06e042', 1, None) {} failed -
  stopping task
  Thread-13::DEBUG::2014-07-30
  15:24:49,780::task::1211::TaskManager.Task::(stop)
  Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::stopping in state
  preparing (force False)
  Thread-13::DEBUG::2014-07-30
  15:24:49,780::task::990::TaskManager.Task::(_decref)
  Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::ref 1 aborting True
  *Thread-13::INFO::2014-07-30
  15:24:49,780::task::1168::TaskManager.Task::(prepare)
  Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::aborting: Task is
  aborted: 'Cannot find master domain' - code 304*
  Thread-13::DEBUG::2014-07-30
  15:24:49,781::task::1173::TaskManager.Task::(prepare)
  Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Prepare: aborted:
  Cannot find master domain
  Thread-13::DEBUG::2014-07-30
  15:24:49,781::task::990::TaskManager.Task::(_decref)
  Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::ref 0 aborting True
  Thread-13::DEBUG::2014-07-30
  15:24:49,781::task::925::TaskManager.Task::(_doAbort)
  Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Task._doAbort: force False
  Thread-13::DEBUG::2014-07-30
  15:24:49,781::resourceManager::977::ResourceManager.Owner::(cancelAll)
  Owner.cancelAll requests {}
  Thread-13::DEBUG::2014-07-30
  15:24:49,781::task::595::TaskManager.Task::(_updateState)
  Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::moving from state
   preparing -> state aborting
  Thread-13::DEBUG::2014-07-30
  15:24:49,781::task::550::TaskManager.Task::(__state_aborting)
  Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::_aborting: recover
  policy none
  Thread-13::DEBUG::2014-07-30
  15:24:49,782::task::595::TaskManager.Task::(_updateState)
  Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::moving from state
   aborting -> state failed
  Thread-13::DEBUG::2014-07-30
  15:24:49,782::resourceManager::940::ResourceManager.Owner::(releaseAll)
  Owner.releaseAll requests {} resources {}
  Thread-13::DEBUG::2014-07-30
  15:24:49,782::resourceManager::977::ResourceManager.Owner::(cancelAll)
  Owner.cancelAll requests {}
  Thread-13::ERROR::2014-07-30
  15:24:49,782::dispatcher::65::Storage.Dispatcher.Protect::(run)
  {'status': {'message': Cannot find master domain:
  'spUUID=a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2,
  msdUUID=741f7913-09ad-4d96-a225-3bda6d06e042', 'code': 304}}

Please advise if I need to have one storage domain per cluster in a
given datacenter.

Thanks, Santosh.



Re: [ovirt-users] Sharing iSCSI data storage domain across multiple clusters in the same datacenter

2014-08-04 Thread santosh


On 08/03/2014 03:01 PM, Itamar Heim wrote:

On 07/30/2014 10:35 PM, santosh wrote:

Hi,

Can we share the iSCSI data storage domain across multiple clusters in
the same datacenter?

Following are the setup details which I tried.

 - One datacenter, say DC1
 - In DC1, two clusters, say CL1 and CL2
 - In CL1, one host, say H1; and in CL2, one host, say H2
 - An iSCSI data storage domain is configured where the external storage
   LUNs are exported to host H1 (a host in CL1 of the datacenter).


While adding H1 to CL1 succeeded, the addition of H2 to CL2 is failing
with the following error in vdsm.log.

 Traceback (most recent call last):
File /usr/share/vdsm/storage/task.py, line 873, in _run
  return fn(*args, **kargs)
File /usr/share/vdsm/logUtils.py, line 45, in wrapper
  res = f(*args, **kwargs)
File /usr/share/vdsm/storage/hsm.py, line 1020, in
 connectStoragePool
  spUUID, hostID, msdUUID, masterVersion, domainsMap)
File /usr/share/vdsm/storage/hsm.py, line 1091, in
 _connectStoragePool
  res = pool.connect(hostID, msdUUID, masterVersion)
File /usr/share/vdsm/storage/sp.py, line 630, in connect
  self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
File /usr/share/vdsm/storage/sp.py, line 1153, in __rebuild
  self.setMasterDomain(msdUUID, masterVersion)
File /usr/share/vdsm/storage/sp.py, line 1360, in setMasterDomain
  raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
 StoragePoolMasterNotFound: Cannot find master domain:
 'spUUID=a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2,
 msdUUID=741f7913-09ad-4d96-a225-3bda6d06e042'
 Thread-13::DEBUG::2014-07-30
 15:24:49,780::task::885::TaskManager.Task::(_run)
 Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Task._run:
 07997682-8d6b-42fd-acb3-1360f14860d6
 ('a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2', 2,
 '741f7913-09ad-4d96-a225-3bda6d06e042', 1, None) {} failed -
 stopping task
 Thread-13::DEBUG::2014-07-30
 15:24:49,780::task::1211::TaskManager.Task::(stop)
 Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::stopping in state
 preparing (force False)
 Thread-13::DEBUG::2014-07-30
 15:24:49,780::task::990::TaskManager.Task::(_decref)
 Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::ref 1 aborting True
 *Thread-13::INFO::2014-07-30
 15:24:49,780::task::1168::TaskManager.Task::(prepare)
 Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::aborting: Task is
 aborted: 'Cannot find master domain' - code 304*
 Thread-13::DEBUG::2014-07-30
 15:24:49,781::task::1173::TaskManager.Task::(prepare)
 Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Prepare: aborted:
 Cannot find master domain
 Thread-13::DEBUG::2014-07-30
 15:24:49,781::task::990::TaskManager.Task::(_decref)
 Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::ref 0 aborting True
 Thread-13::DEBUG::2014-07-30
 15:24:49,781::task::925::TaskManager.Task::(_doAbort)
 Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Task._doAbort: force False
 Thread-13::DEBUG::2014-07-30
 15:24:49,781::resourceManager::977::ResourceManager.Owner::(cancelAll)
 Owner.cancelAll requests {}
 Thread-13::DEBUG::2014-07-30
 15:24:49,781::task::595::TaskManager.Task::(_updateState)
 Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::moving from state
  preparing -> state aborting
 Thread-13::DEBUG::2014-07-30
 15:24:49,781::task::550::TaskManager.Task::(__state_aborting)
 Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::_aborting: recover
 policy none
 Thread-13::DEBUG::2014-07-30
 15:24:49,782::task::595::TaskManager.Task::(_updateState)
 Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::moving from state
  aborting -> state failed
 Thread-13::DEBUG::2014-07-30
 15:24:49,782::resourceManager::940::ResourceManager.Owner::(releaseAll)
 Owner.releaseAll requests {} resources {}
 Thread-13::DEBUG::2014-07-30
 15:24:49,782::resourceManager::977::ResourceManager.Owner::(cancelAll)
 Owner.cancelAll requests {}
 Thread-13::ERROR::2014-07-30
 15:24:49,782::dispatcher::65::Storage.Dispatcher.Protect::(run)
 {'status': {'message': Cannot find master domain:
 'spUUID=a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2,
 msdUUID=741f7913-09ad-4d96-a225-3bda6d06e042', 'code': 304}}

Please advise if I need to have one storage domain per cluster in a
given datacenter.

Thanks, Santosh.



[ovirt-users] Sharing iSCSI data storage domain across multiple clusters in the same datacenter

2014-07-30 Thread santosh

Hi,

Can we share the iSCSI data storage domain across multiple clusters in
the same datacenter?


Following are the setup details which I tried.

   - One datacenter, say DC1
   - In DC1, two clusters, say CL1 and CL2
   - In CL1, one host, say H1; and in CL2, one host, say H2
   - An iSCSI data storage domain is configured where the external storage
     LUNs are exported to host H1 (a host in CL1 of the datacenter).


While adding H1 to CL1 succeeded, the addition of H2 to CL2 is failing
with the following error in vdsm.log.


   Traceback (most recent call last):
  File /usr/share/vdsm/storage/task.py, line 873, in _run
return fn(*args, **kargs)
  File /usr/share/vdsm/logUtils.py, line 45, in wrapper
res = f(*args, **kwargs)
  File /usr/share/vdsm/storage/hsm.py, line 1020, in
   connectStoragePool
spUUID, hostID, msdUUID, masterVersion, domainsMap)
  File /usr/share/vdsm/storage/hsm.py, line 1091, in
   _connectStoragePool
res = pool.connect(hostID, msdUUID, masterVersion)
  File /usr/share/vdsm/storage/sp.py, line 630, in connect
self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
  File /usr/share/vdsm/storage/sp.py, line 1153, in __rebuild
self.setMasterDomain(msdUUID, masterVersion)
  File /usr/share/vdsm/storage/sp.py, line 1360, in setMasterDomain
raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
   StoragePoolMasterNotFound: Cannot find master domain:
   'spUUID=a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2,
   msdUUID=741f7913-09ad-4d96-a225-3bda6d06e042'
   Thread-13::DEBUG::2014-07-30
   15:24:49,780::task::885::TaskManager.Task::(_run)
   Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Task._run:
   07997682-8d6b-42fd-acb3-1360f14860d6
   ('a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2', 2,
   '741f7913-09ad-4d96-a225-3bda6d06e042', 1, None) {} failed -
   stopping task
   Thread-13::DEBUG::2014-07-30
   15:24:49,780::task::1211::TaskManager.Task::(stop)
   Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::stopping in state
   preparing (force False)
   Thread-13::DEBUG::2014-07-30
   15:24:49,780::task::990::TaskManager.Task::(_decref)
   Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::ref 1 aborting True
   *Thread-13::INFO::2014-07-30
   15:24:49,780::task::1168::TaskManager.Task::(prepare)
   Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::aborting: Task is
   aborted: 'Cannot find master domain' - code 304*
   Thread-13::DEBUG::2014-07-30
   15:24:49,781::task::1173::TaskManager.Task::(prepare)
   Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Prepare: aborted:
   Cannot find master domain
   Thread-13::DEBUG::2014-07-30
   15:24:49,781::task::990::TaskManager.Task::(_decref)
   Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::ref 0 aborting True
   Thread-13::DEBUG::2014-07-30
   15:24:49,781::task::925::TaskManager.Task::(_doAbort)
   Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Task._doAbort: force False
   Thread-13::DEBUG::2014-07-30
   15:24:49,781::resourceManager::977::ResourceManager.Owner::(cancelAll)
   Owner.cancelAll requests {}
   Thread-13::DEBUG::2014-07-30
   15:24:49,781::task::595::TaskManager.Task::(_updateState)
   Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::moving from state
    preparing -> state aborting
   Thread-13::DEBUG::2014-07-30
   15:24:49,781::task::550::TaskManager.Task::(__state_aborting)
   Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::_aborting: recover
   policy none
   Thread-13::DEBUG::2014-07-30
   15:24:49,782::task::595::TaskManager.Task::(_updateState)
   Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::moving from state
    aborting -> state failed
   Thread-13::DEBUG::2014-07-30
   15:24:49,782::resourceManager::940::ResourceManager.Owner::(releaseAll)
   Owner.releaseAll requests {} resources {}
   Thread-13::DEBUG::2014-07-30
   15:24:49,782::resourceManager::977::ResourceManager.Owner::(cancelAll)
   Owner.cancelAll requests {}
   Thread-13::ERROR::2014-07-30
   15:24:49,782::dispatcher::65::Storage.Dispatcher.Protect::(run)
   {'status': {'message': Cannot find master domain:
   'spUUID=a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2,
   msdUUID=741f7913-09ad-4d96-a225-3bda6d06e042', 'code': 304}}

Please advise if I need to have one storage domain per cluster in a
given datacenter.
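
Since storage domains attach at the data-center level (every cluster in DC1 sees them), a minimal Python sketch of the first check when a host reports "Cannot find master domain" is shown below: list the domains attached to the data center and their status. The engine URL, credentials and python requests dependency are assumptions; the DC ID reuses the spUUID from the log above.

# Hedged sketch: list the storage domains attached to the data center and
# their status. Engine URL and credentials are placeholders.
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/api"
AUTH = ("admin@internal", "password")
dc_id = "a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2"   # spUUID from the log above

resp = requests.get("%s/datacenters/%s/storagedomains" % (ENGINE, dc_id),
                    auth=AUTH, verify=False)
resp.raise_for_status()

for sd in ET.fromstring(resp.content).findall("storage_domain"):
    print(sd.findtext("name"), sd.get("id"),
          "master=%s" % sd.findtext("master"),
          "status=%s" % sd.findtext("status/state"))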


Thanks, Santosh.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] RHEV - Hypervisor directory structure.

2014-07-25 Thread santosh

Hi,

I am trying to understand the directory structure on the RHEV hypervisor.
Below is part of the directory tree on the hypervisor.


   [root@XYZ dom_md]# tree /rhev/data-center/51a24440-6a1f-48f0-8306-92455fe7aaa1/mastersd/

   /rhev/data-center/51a24440-6a1f-48f0-8306-92455fe7aaa1/mastersd/
   ├── dom_md
   │   ├── ids -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/ids
   │   ├── inbox -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/inbox
   │   ├── leases -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/leases
   │   ├── master -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/master
   │   ├── metadata -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/metadata
   │   └── outbox -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/outbox
   ├── images
   │   ├── 7f0be608-0251-4125-a3a1-b4e74bbcaa34
   │   │   └── 53596e07-0317-43b3-838a-13cde56ce1c8 -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/53596e07-0317-43b3-838a-13cde56ce1c8
   │   ├── aa6b4787-271f-4651-98c8-97054ff4418d
   │   │   ├── 22961431-c139-4311-bc78-c4f5a58cfda7 -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/22961431-c139-4311-bc78-c4f5a58cfda7
   │   │   └── f5f5f1ff-af71-4d11-a15a-dbc863e5d6f7 -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/f5f5f1ff-af71-4d11-a15a-dbc863e5d6f7
   │   └── e4f70c9e-c5b3-4cbe-a755-684d6a86026f
   │       └── 8c2f5f05-c109-45e7-af98-c54437ad5d9e -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/8c2f5f05-c109-45e7-af98-c54437ad5d9e
   └── master
       ├── lost+found
       ├── tasks
       └── vms
           ├── 1ba7645c-79db-403a-95f6-b3078e441b86
           │   └── 1ba7645c-79db-403a-95f6-b3078e441b86.ovf
           └── 6df2c080-a0d5-4202-8f09-ed719184f667
               └── 6df2c080-a0d5-4202-8f09-ed719184f667.ovf


I am trying to understand the ids, inbox, leases, master, metadata and 
outbox device files above.


I would appreciate any pointers for finding this information.
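
As a pointer, a short hedged summary (the descriptions are general background about oVirt/RHEV block storage domains, not something stated elsewhere in this thread), written as a small Python mapping so each entry carries its own comment-style description:

# Hedged summary of what the dom_md entries on a block storage domain are
# generally used for; treat these one-line descriptions as background notes.
DOM_MD_FILES = {
    "ids":      "sanlock lockspace; tracks host IDs / host liveness",
    "leases":   "sanlock resources, e.g. the SPM lease and volume leases",
    "metadata": "storage domain (and, on block domains, volume) metadata",
    "inbox":    "SPM mailbox: messages from hosts to the SPM, "
                "such as LV extension requests for thin-provisioned disks",
    "outbox":   "SPM mailbox: replies from the SPM back to the hosts",
    "master":   "LV holding the master filesystem (the tasks/ and vms/ "
                "directories shown above)",
}

for name, role in DOM_MD_FILES.items():
    print("%-8s %s" % (name, role))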

Thanks.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users