[Engine-devel] How to map the oVirt engine version to VDSM version by git tags?
Hi, I am looking for a way to map an oVirt version to the corresponding VDSM and engine versions by git tags. I can run "git tag -l" in the engine git workspace and in the vdsm git workspace. Here is the output of these two "git tag -l" commands. Under the oVirt engine workspace: -bash-4.1$ git tag -l ovirt-engine-3.0.0_0001 ovirt-engine-3.1.0 ovirt-engine-3.2.0 ovirt-engine-3.2.1 Under the vdsm workspace: -bash-4.1$ git tag -l v4.10.0 v4.10.1 v4.10.2 v4.10.3 v4.9.0 v4.9.1 v4.9.2 v4.9.3 v4.9.3.1 v4.9.3.2 v4.9.3.3 v4.9.4 v4.9.5 v4.9.6 I can check out the oVirt 3.2.1 snapshot in the engine workspace with "git checkout ovirt-engine-3.2.1". But how can I get the matching VDSM snapshot from the tags in the VDSM workspace? How can I know which change-set corresponds to oVirt 3.2.1 in the VDSM workspace? -- --- 舒明 Shu Ming Open Virtualization Engineering; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
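A minimal sketch of the tag-to-tag workflow, assuming the repository paths below. Git itself does not record which VDSM tag ships with which engine tag, so the v4.10.3 / ovirt-engine-3.2.1 pairing used here is an assumption that must be verified against the oVirt release notes:

    import subprocess

    def git(repo, *args):
        # Run a git command inside the given working copy and return its output.
        return subprocess.check_output(["git"] + list(args), cwd=repo, text=True)

    engine_repo = "/path/to/ovirt-engine"   # hypothetical checkout locations
    vdsm_repo = "/path/to/vdsm"

    print(git(engine_repo, "tag", "-l"))    # ovirt-engine-3.2.1, ...
    print(git(vdsm_repo, "tag", "-l"))      # v4.10.3, ...

    # Check out what you believe is the matching pair (verify first):
    git(engine_repo, "checkout", "ovirt-engine-3.2.1")
    git(vdsm_repo, "checkout", "v4.10.3")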
Re: [Engine-devel] [RFC] new power management protocol "libvirtp"
Dan Kenigsberg: On Tue, Apr 02, 2013 at 10:15:44AM +0800, Shu Ming wrote: Hi, In oVirt environment, power manager should be enabled for the host to support VM high availability in the cluster. Various kinds of physical power management protocol are supported, ie., ALOM, ILO&etc. However, when the oVirt is running on a nested KVM environment, there is no feasible way to do the power management of the VDSM host(also a KVM virtual machine). A new protocol will be based on libvirt to achieve the power management of a virtual host. The new protocol can be named as "libvirtp". In oVirt engine, a new type will be added to power management---> type libvirtp power management--->address it will be the IP of the physical host where the virtual VDSM host is on when "libvirtp" is selected power management--->user name it will be the user name to the libvirtd service power management--->password it will be the password to the libvirtd service power management--->port it will be the port to the libvirtd service Have you looked into fence_virsh or fence_virt ? Don't they provide what you want? Thanks for your reminder. I think fence_virsh or fence_virt can be leveraged to achieve the goal.. For fence_virsh, it requires virsh to be installed on the vdsm virtual host. For fence_virt, it requires fence_virtd to be installed on the vdsm virtual host and it is not libvirt centric. With these two power management protocol, the oVirt engine still need change to integrate these two protocols for the host power management. -- --- 舒明 Shu Ming Open Virtualization Engineerning; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
[Engine-devel] [RFC] new power management protocol "libvirtp"
Hi, In an oVirt environment, power management should be enabled for the host to support VM high availability in the cluster. Various kinds of physical power management protocols are supported, e.g. ALOM, iLO, etc. However, when oVirt is running in a nested KVM environment, there is no feasible way to do power management of the VDSM host (itself a KVM virtual machine). A new protocol based on libvirt could achieve power management of a virtual host. The new protocol can be named "libvirtp". In oVirt engine, a new type will be added: power management ---> type: libvirtp; power management ---> address: the IP of the physical host the virtual VDSM host runs on when "libvirtp" is selected; power management ---> user name: the user name for the libvirtd service; power management ---> password: the password for the libvirtd service; power management ---> port: the port of the libvirtd service. -- --- 舒明 Shu Ming Open Virtualization Engineering; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
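A hedged sketch of what a "libvirtp" fence action could look like with the libvirt Python bindings: connect to libvirtd on the physical host using the address/port/user name/password fields from the proposed dialog and power-cycle the VM that acts as the VDSM host. This is an illustration of the idea, not existing oVirt or VDSM code; the qemu+tcp URI scheme and the SASL credential handling are assumptions.

    import libvirt

    def libvirtp_fence(address, port, username, password, vm_name, action):
        def request_cred(credentials, user_data):
            # Feed the dialog's user name/password into libvirt's auth callback.
            for cred in credentials:
                if cred[0] == libvirt.VIR_CRED_AUTHNAME:
                    cred[4] = username
                elif cred[0] == libvirt.VIR_CRED_PASSPHRASE:
                    cred[4] = password
            return 0

        uri = "qemu+tcp://%s:%s/system" % (address, port)
        auth = [[libvirt.VIR_CRED_AUTHNAME, libvirt.VIR_CRED_PASSPHRASE],
                request_cred, None]
        conn = libvirt.openAuth(uri, auth, 0)
        try:
            dom = conn.lookupByName(vm_name)
            if action == "off":
                dom.destroy()        # hard power-off of the virtual VDSM host
            elif action == "on":
                dom.create()
            elif action == "status":
                return dom.isActive()
        finally:
            conn.close()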
Re: [Engine-devel] Proposal for support ISO domain on other types of file based storage.
Mark Wu : On Sun 17 Mar 2013 10:12:55 PM CST, Ayal Baron wrote: - Original Message - Hi guys, Currently, ISO domain is only supported on NFS storage. It could improve the ease of use if it allows other types of file based storage to store ISO images. After an investigation, I found there's not any restriction on this idea. So the whole work is removing the limitation on engine side. That means engine should allow ISO domain could have different storage type from the data center it's attached, like what we do with nfs ISO domain in SAN DC. I start this idea with localfs. I know local storage can't be seen in cluster level. But it also provides a choice if no NFS available. VMs can be created on the host which has the ISO repo, and then be migrated to any other host in the cluster. I have done the initial patches: allow creation ISO domain on localfs [1] and support import ISO domain on localfs [2] I don't have much experience in java/j2ee/web development and engine architecture. The patches just work for me. I am not sure if it will bring some potential problems. So any feedback on the patch or the idea will be appreciated very much. Haven't looked at the patches yet, but wrt the idea, I agree on the need (being able to attach ISOs from anywhere and not just nfs) but I think the way to do this should be by getting rid of the ISO domain type altogether. I think ISO domain on localfs is useful for a simple setup or demo, such as oVirt all-in-one. Basically what we need is: 1. a way to connect to file based storage (let's leave block aside for now) - this already exists via the connectStorageServer verb 2. a way to list and present a file system tree in gui (give an arbitrary path to vdsm and list content) and possibly filter results by type (vfd, iso) - does not exist today. Possibly some security aspects here that need hashing out. 3. a way to specify a path to a file when attaching an iso/vfd to a VM - this is the way it works today This would devoid the need for isoUploader and allow users to simply manage an nfs export with files. Next step would be to make connectStorageServer support httpfs [1] and then we'd be able to mount ISOs directly over http (hopefully this would be sufficient to support ISOs stored on S3, swift, glance, etc). Actually, we could use the qemu curl backend image support directly. That means we don't need mount the place storing ISO images. We can just maintain a list of ISO image with its link, which could be http, ftp and ssh. That will be fine to start a VM on a existing extern ISO image. I also would like to maintain a ISO image cache on the local host to avoid to re-streaming the ISO image from the ISO image repositories every time. That will be helpful for people who is suffered from the network bottleneck. [1] http://httpfs.sourceforge.net/ Mark. [1] http://gerrit.ovirt.org/#/c/12687/ [2] http://gerrit.ovirt.org/#/c/12916/ ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- --- 舒明 Shu Ming Open Virtualization Engineerning; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
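As a side note on item 2 in Ayal's list (list a file system tree in the GUI and filter results by type), a minimal sketch of what such a listing could return for an already-mounted, arbitrary path. This is not an existing VDSM verb, only an illustration, and the example path is hypothetical:

    import os

    def list_iso_images(path, extensions=(".iso", ".vfd")):
        # Walk the tree and return image files with basic metadata.
        images = []
        for root, _dirs, files in os.walk(path):
            for name in files:
                if name.lower().endswith(extensions):
                    full = os.path.join(root, name)
                    images.append({"path": full, "size": os.path.getsize(full)})
        return images

    # e.g. list_iso_images("/rhev/data-center/mnt/server:_export_iso")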
Re: [Engine-devel] function insertstorage_domain_static(uuid, ...) does not exist
Doron, Do you get any idea about it? startup_...@sina.cn: Hi, I built the oVirt engine RPM packages from the latest oVirt engine source code and setup a yum installation server with these packages. I can successfully install the RPM packages on my oVirt engine test server. But when the "engine-setup" command was used to setup the engine sever, the following errors were encountered from engine-setup log. It seems that inst_add_iso_storage_domain was not created properly for psql. Any clue to this problem? 2013-03-05 20:02:19::DEBUG::nfsutils::192::root:: Generating unique uuid 2013-03-05 20:02:19::DEBUG::common_utils::501::root:: running sql query 'select inst_add_iso_storage_domain ('ab22b419-6381-4f0a-8b46-aa72c81edbb5', 'ISO_DOMAIN', 'ead292ee-14dd-40f9-b06b-f2250f7594a9', 'localhost.localdomain:/ISO', 0, 0)' on db server: 'localhost'. 2013-03-05 20:02:19::DEBUG::common_utils::454::root:: Executing command --> '/usr/bin/psql -h localhost -p 5432 -U engine -d engine -c select inst_add_iso_storage_domain ('ab22b419-6381-4f0a-8b46-aa72c81edbb5', 'ISO_DOMAIN', 'ead292ee-14dd-40f9-b06b-f2250f7594a9', 'localhost.localdomain:/ISO', 0, 0)' in working directory '/root' 2013-03-05 20:02:19::DEBUG::common_utils::492::root:: output = 2013-03-05 20:02:19::DEBUG::common_utils::493::root:: stderr = ERROR: NUM:42883, DETAILS:function insertstorage_domain_static(uuid, character varying, character varying, integer, integer, unknown, integer) does not exist 2013-03-05 20:02:19::DEBUG::common_utils::494::root:: retcode = 1 2013-03-05 20:02:19::ERROR::engine-setup::1809::root:: Traceback (most recent call last): File "/bin/engine-setup", line 1804, in _configNfsShare _addIsoDomaintoDB(controller.CONF["sd_uuid"], controller.CONF["ISO_DOMAIN_NAME"]) File "/bin/engine-setup", line 1860, in _addIsoDomaintoDB utils.execRemoteSqlCommand(getDbUser(), getDbHostName(), getDbPort(), basedefs.DB_NAME, sqlQuery, True, output_messages.ERR_FAILED_INSERT_ISO_DOMAIN%(basedefs.DB_NAME)) File "/usr/share/ovirt-engine/scripts/common_utils.py", line 510, in execRemoteSqlCommand return execCmd(cmdList=cmd, failOnError=failOnError, msg=errMsg, envDict=getPgEnv()) File "/usr/share/ovirt-engine/scripts/common_utils.py", line 497, in execCmd raise Exception(msg) Exception: Failed inserting ISO domain into engine db 2013-03-05 20:02:19::DEBUG::setup_sequences::62::root:: Traceback (most recent call last): File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run function() File "/bin/engine-setup", line 1810, in _configNfsShare raise Exception(output_messages.ERR_FAILED_CFG_NFS_SHARE) Exception: Failed to configure NFS share on this host 2013-03-05 20:02:19::DEBUG::engine-setup::1992::root:: *** The following params were used as user input: 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: override-httpd-config: yes 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: http-port: 80 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: https-port: 443 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: random-passwords: no 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: mac-range: 00:1A:4A:A8:01:00-00:1A:4A:A8:01:FF 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: host-fqdn: localhost.localdomain 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: auth-pass: 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: org-name: localdomain 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: application-mode: virt 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: default-dc-type: NFS 
2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: db-remote-install: local 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: db-host: localhost 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: db-local-pass: 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: nfs-mp: /ISO 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: iso-domain-name: ISO_DOMAIN 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: config-nfs: yes 2013-03-05 20:02:19::DEBUG::engine-setup::1997::root:: firewall-manager: iptables 2013-03-05 20:02:19::ERROR::engine-setup::2413::root:: Traceback (most recent call last): File "/bin/engine-setup", line 2407, in main(confFile) File "/bin/engine-setup", line 2190, in main ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- --- ?? Shu Ming Open Virtualization Engineerning; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
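A quick diagnostic for the error above, along the lines of what engine-setup's common_utils does internally: ask PostgreSQL whether the helper functions exist at all. The pg_proc lookup is plain SQL, not an oVirt-provided utility, and the connection parameters simply mirror the ones in the log:

    import subprocess

    def function_exists(name, db="engine", user="engine",
                        host="localhost", port="5432"):
        query = "SELECT count(*) FROM pg_proc WHERE proname = '%s';" % name
        out = subprocess.check_output(
            ["/usr/bin/psql", "-h", host, "-p", port, "-U", user, "-d", db,
             "-t", "-A", "-c", query], text=True)
        return out.strip() != "0"

    # If these print False, the DB function-creation scripts were never run,
    # which would explain the "does not exist" error during engine-setup.
    print(function_exists("inst_add_iso_storage_domain"))
    print(function_exists("insertstorage_domain_static"))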
Re: [Engine-devel] [vdsm] RFC: New Storage API
2013-1-15 5:34, Ayal Baron: image and volume are overused everywhere and it would be extremely confusing to have multiple meanings to the same terms in the same system (we have image today which means virtual disk and volume which means a part of a virtual disk). Personally I don't like the distinction between image and volume done in ec2/openstack/etc seeing as they're treated as different types of entities there while the only real difference is mutability (images are read-only, volumes are read-write). To move to the industry terminology we would need to first change all references we have today to image and volume in the system (I would say also in ovirt-engine side) to align with the new meaning. Despite my personal dislike of the terms, I definitely see the value in converging on the same terminology as the rest of the industry but to do so would be an arduous task which is out of scope of this discussion imo (patches welcome though ;) Another distinction between OpenStack and oVirt is how Nova/ovirt-engine view storage systems. In OpenStack, a standalone storage service (Cinder) exports the raw storage block device to Nova. In oVirt, on the other hand, the storage system is tightly bound to the cluster scheduling system, which integrates the storage sub-system, the VM dispatching sub-system, and the ISO image sub-system. This combination integrates all of the sub-systems into a whole that is easy to deploy, but it makes each sub-system more opaque and harder to reuse and maintain. This new storage API proposal gives us an opportunity to separate these sub-systems into new components that export better, loosely coupled APIs to VDSM. ___ vdsm-devel mailing list vdsm-de...@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- --- 舒明 Shu Ming Open Virtualization Engineering; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] Live storage migration status in oVirt 3.2
于 2013-1-11 19:07, Daniel Erez 写道: - Original Message - From: "Shu Ming" To: engine-devel@ovirt.org, "VDSM Project Development" Cc: "Federico Simoncelli" , de...@redhat.com Sent: Friday, January 11, 2013 2:45:47 AM Subject: Live storage migration status in oVirt 3.2 Hi, I am reviewing the live storage migration with my oVirt environment updated from the public nightly repository two weeks ago. I found that the "move" button was still gray as before when the VM was up. Only after I deactivated the disk, did the button become into non-gray state. I am wondering if the live storage migration will be supported in oVirt 3.2 release or not. BTW: These patches below should enable the live storage migration already, but I can not see it enabled in my engine. http://gerrit.ovirt.org/5252 Change I91e641cb: pool: live storage migration implementation http://gerrit.ovirt.org/8105 Change subject: core: Live Storage Migration commands http://gerrit.ovirt.org/8103 Change subject: core: VDS Commands for Live Storage Migration http://gerrit.ovirt.org/8102 Change subject: core: Adding VDSM API for Live Storage Migration http://gerrit.ovirt.org/8470 Change subject: webadmin: Adding Live Storage Migration support http://gerrit.ovirt.org/8857 core: disable LiveStorageMigration on 3.1 -- --- 舒明 Shu Ming Open Virtualization Engineerning; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC Hi Shu, When a VM is up, the "move" button is enabled only for Data Center version 3.2. Can you please verify that the selected disk resides on a 3.2 Data Center? I checked my data center compatibility version is 3.1. However, it is interesting that the compatibility version of the cluster in the data center is 3.2 Does the data center allow a higher version cluster? I will try to upgrade the data center version and try the migration again. Best Regards, Daniel -- --- 舒明 Shu Ming Open Virtualization Engineerning; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
[Engine-devel] Live storage migration status in oVirt 3.2
Hi, I am reviewing live storage migration with my oVirt environment, updated from the public nightly repository two weeks ago. I found that the "move" button was still gray, as before, when the VM was up. Only after I deactivated the disk did the button become active. I am wondering whether live storage migration will be supported in the oVirt 3.2 release or not. BTW: the patches below should enable live storage migration already, but I cannot see it enabled in my engine. http://gerrit.ovirt.org/5252 Change I91e641cb: pool: live storage migration implementation http://gerrit.ovirt.org/8105 Change subject: core: Live Storage Migration commands http://gerrit.ovirt.org/8103 Change subject: core: VDS Commands for Live Storage Migration http://gerrit.ovirt.org/8102 Change subject: core: Adding VDSM API for Live Storage Migration http://gerrit.ovirt.org/8470 Change subject: webadmin: Adding Live Storage Migration support http://gerrit.ovirt.org/8857 core: disable LiveStorageMigration on 3.1 -- --- 舒明 Shu Ming Open Virtualization Engineering; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
[Engine-devel] ovirt nightly FC17 public repositories broken?
Hi, Has anyone encountered this error message when using yum on the oVirt public repositories? http://ovirt.org/releases/nightly/rpm/Fedora/17/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum My vdsm host and engine server were installed with FC17 + virt-preview, so I used the FC17 nightly releases to update my VDSM and engine packages. After a bit of investigation, I found these issues. [root@node1-sming ovirt-nightly]# ls -lh * -rw-r--r--. 1 root root 0 Jan 7 16:47 cachecookie -rw-r--r--. 1 root root 702K Jan 7 14:22 filelists.xml.gz -rw-r--r--. 1 root root 132K Jan 7 14:22 other.xml.gz -rw-r--r--. 1 root root 279K Jan 7 14:22 primary.xml.gz -rw-r--r--. 1 root root 1.4K Jan 5 14:35 repomd.xml <--- the date was always Jan 5, different from the other files, even though http://resources.ovirt.org/releases/nightly/rpm/Fedora/17/repodata/ showed it was Jan 7; I also tried to use wget to download this file and got the same thing. [root@node1-sming ovirt-nightly]# sha1sum other.xml.gz filelists.xml.gz primary.xml.gz 453559e86950af07c876931f318d1e92ab58f289 other.xml.gz 5318e237c0f4d2ea3d1ec24e82b9f9bbe9d23a31 filelists.xml.gz e667267d45b576844a5f2cb27ce7bcfb8bb9b4f0 primary.xml.gz [root@node1-sming ovirt-nightly]# cat repomd.xml |grep open-checksum ce6a5243ee81c124d0fe48d103174b93fabe60573e80b5851888298373d3be9e 554a301c526d335ca4a7985e7fc7b0374bca8c061ff5addbd164de81996dc5a0 be3f9e6860cd9a4ac4f3f5000ad503175e7ac79a43185a778d4ab732c0d8190c Note: these checksums from repomd.xml were quite different from the sha1sum results above. Any clues here? -- --- 舒明 Shu Ming Open Virtualization Engineering; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
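One note on the comparison above: the open-checksum values in repomd.xml are 64 hex digits (SHA-256) and describe the decompressed metadata files, so sha1sum output on the .gz files is not directly comparable; even with the right algorithm, though, a repomd.xml two days older than the other files means the repo metadata really was inconsistent. A small sketch for computing the value repomd.xml actually records, assuming the file names shown above:

    import gzip
    import hashlib

    def open_checksum(gz_path):
        # SHA-256 of the *decompressed* file, which is what <open-checksum>
        # in repomd.xml describes (its type= attribute names the algorithm).
        with gzip.open(gz_path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    for name in ("primary.xml.gz", "filelists.xml.gz", "other.xml.gz"):
        print(name, open_checksum(name))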
Re: [Engine-devel] FW: Querying for and registering unknown disk images on a storage domain
2012-12-20 23:18, Morrissey, Christopher: Hi All, I've been working on a bit of functionality for the engine that will allow a user to query a domain for new disk images (GetUnregisteredImagesQuery) for which the engine was previously unaware and a separate command to register those images (ImportImageCommand). These commands will be exposed through the REST API. This functionality is needed as we are developing an extension/plugin to oVirt that will allow a NetApp storage controller to handle cloning the actual disks outside of oVirt and need to import them once they are cloned. We'll be using other existing APIs to attach the disk to the necessary VM once the disk is cloned. On the NetApp side, we'll ensure the disk is coalesced before cloning so as to avoid the issues of registering snapshots. I am just curious about how the third party tool like NetApp to make sure the disk of a running VM coalesced before cloning? By an agent in the VM to flush file-system cache out to the disk? GetUnregisteredImagesQuery will be accessible through the disks resource collection on a storage domain. A "disks" resource collection does not yet exist and will need to be added. To access the unregistered images, a parameter (maybe "unregistered=true") would be passed. So the path to "GET" the unregistered disk images on a domain would be something like /api/storagedomains/f0dbcb33-69d3-4899-9352-8e8a02f01bbd/disks?unregistered=true. This will return a list of disk images that can be each used as input to the ImportImageCommand to get them added to oVirt. ImportImageCommand will be accessible through "POST"ing a disk to /api/disks?import=true. The disk will be added to the oVirt DB based on the information supplied and afterward would be available to attach to a VM. When querying for unregistered disk images, the GetUnregisteredImagesQuery command will use the getImagesList() VDSM command. Currently this only reports the GUIDs of all disk images in a domain. I had been using the getVolumesList() and getVolumeInfo() VDSM commands to fill in the information so that valid disk image objects could be registered in oVirt. It seems these two functions are set to be removed since they are too invasive into the internal VDSM workings. The VDSM team will need to either return more information about each disk as part of the getImagesList() function or add a new function getImageInfo() that will give the same information for a given image GUID. Here is the project proposal for floating disk in oVirt. I think unregistered images are also floating disks. http://www.ovirt.org/Features/DetailedFloatingDisk Note that much of this work had originally been submitted under patch http://gerrit.ovirt.org/#/c/9603/. After several reviews it was found to be lacking in its design and was using deprecated APIs that did not yet have replacements. I'm reworking the code now to conform to this design and asking for further input from the VDSM, core, and restapi teams to ensure we can get this done quickly and correctly as it is needed for the 3.2 release. -Chris *Chris Morrissey* Software Engineer NetApp Inc. 919.476.4428 ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- --- ?? Shu Ming Open Virtualization Engineerning; CSTL, IBM Corp. 
Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
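A hedged sketch of how the proposed flow could be exercised over the REST API once implemented. The ?unregistered=true and ?import=true parameters are the ones proposed above, not a shipped API; the engine URL and credentials are placeholders, and the XML body is only illustrative:

    import requests

    ENGINE = "https://engine.example.com/api"        # hypothetical engine URL
    AUTH = ("admin@internal", "password")
    SD = "f0dbcb33-69d3-4899-9352-8e8a02f01bbd"      # storage domain from the example path

    # 1. List disk images on the domain that the engine does not know about yet.
    resp = requests.get("%s/storagedomains/%s/disks" % (ENGINE, SD),
                        params={"unregistered": "true"}, auth=AUTH, verify=False)
    print(resp.status_code, resp.text)

    # 2. Register one of them (the disk id would come from the previous answer).
    disk_xml = ("<disk id='SOME-IMAGE-GUID'><storage_domains>"
                "<storage_domain id='%s'/></storage_domains></disk>" % SD)
    resp = requests.post("%s/disks" % ENGINE, params={"import": "true"},
                         data=disk_xml,
                         headers={"Content-Type": "application/xml"},
                         auth=AUTH, verify=False)
    print(resp.status_code, resp.text)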
Re: [Engine-devel] [vdsm] RFC: New Storage API
2012-12-11 4:36, Saggi Mizrahi: - Original Message - From: "Adam Litke" To: "Saggi Mizrahi" Cc: "Shu Ming" , "engine-devel" , "VDSM Project Development" Sent: Monday, December 10, 2012 1:39:51 PM Subject: Re: [vdsm] RFC: New Storage API On Thu, Dec 06, 2012 at 11:52:01AM -0500, Saggi Mizrahi wrote: - Original Message - From: "Shu Ming" To: "Saggi Mizrahi" Cc: "VDSM Project Development" , "engine-devel" Sent: Thursday, December 6, 2012 11:02:02 AM Subject: Re: [vdsm] RFC: New Storage API Saggi, Thanks for sharing your thought and I get some comments below. Saggi Mizrahi: I've been throwing a lot of bits out about the new storage API and I think it's time to talk a bit. I will purposefully try and keep implementation details away and concentrate about how the API looks and how you use it. First major change is in terminology, there is no long a storage domain but a storage repository. This change is done because so many things are already called domain in the system and this will make things less confusing for new-commers with a libvirt background. One other changes is that repositories no longer have a UUID. The UUID was only used in the pool members manifest and is no longer needed. connectStorageRepository(repoId, repoFormat, connectionParameters={}): repoId - is a transient name that will be used to refer to the connected domain, it is not persisted and doesn't have to be the same across the cluster. repoFormat - Similar to what used to be type (eg. localfs-1.0, nfs-3.4, clvm-1.2). connectionParameters - This is format specific and will used to tell VDSM how to connect to the repo. Where does repoID come from? I think repoID doesn't exist before connectStorageRepository() return. Isn't repoID a return value of connectStorageRepository()? No, repoIDs are no longer part of the domain, they are just a transient handle. The user can put whatever it wants there as long as it isn't already taken by another currently connected domain. disconnectStorageRepository(self, repoId) In the new API there are only images, some images are mutable and some are not. mutable images are also called VirtualDisks immutable images are also called Snapshots There are no explicit templates, you can create as many images as you want from any snapshot. There are 4 major image operations: createVirtualDisk(targetRepoId, size, baseSnapshotId=None, userData={}, options={}): targetRepoId - ID of a connected repo where the disk will be created size - The size of the image you wish to create baseSnapshotId - the ID of the snapshot you want the base the new virtual disk on userData - optional data that will be attached to the new VD, could be anything that the user desires. options - options to modify VDSMs default behavior returns the id of the new VD I think we will also need a function to check if a a VirtualDisk is based on a specific snapshot. Like: isSnapshotOf(virtualDiskId, baseSnapshotID): No, the design is that volume dependencies are an implementation detail. There is no reason for you to know that an image is physically a snapshot of another. Logical snapshots, template information, and any other information can be set by the user by using the userData field available for every image. Statements like this make me start to worry about your userData concept. It's a sign of a bad API if the user needs to invent a custom metadata scheme for itself. This reminds me of the abomination that is the 'custom' property in the vm definition today. In one sentence: If VDSM doesn't care about it, VDSM doesn't manage it. 
userData being a "void*" is quite common and I don't understand why you would thing it's a sign of a bad API. Further more, giving the user choice about how to represent it's own metadata and what fields it want to keep seems reasonable to me. Especially given the fact that VDSM never reads it. The reason we are pulling away from the current system of VDSM understanding the extra data is that it makes that data tied to VDSMs on disk format. VDSM on disk format has to be very stable because of clusters with multiple VDSM versions. Further more, since this is actually manager data it has to be tied to the manager backward compatibility lifetime as well. Having it be opaque to VDSM ties it to only one, simpler, support lifetime instead of two. Making userData being opaque gives flexibilities to the management applications. To me, opaque userDaa can have two types at least. The first is the userData for runtime only. The second is the userData expected to be persisted into the metadata disk. For the first type, the management applications can store their own data structures like temporary task states, VDSM query caches &etc. After the VDSM host is fenced, th
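To make the userData discussion above concrete, a small sketch of how a manager could keep its own logical metadata in the opaque blob. createVirtualDisk is the verb proposed in this thread, so the function below is only a local stand-in, and the metadata field names are invented for illustration:

    import json
    import uuid

    def createVirtualDisk(targetRepoId, size, baseSnapshotId=None,
                          userData=None, options=None):
        # Stand-in for the proposed VDSM verb; a real call would go over
        # the VDSM API transport and return the new virtual disk id.
        return str(uuid.uuid4())

    manager_metadata = {
        "logicalName": "web-server-root-disk",
        "isTemplate": False,
        "baseSnapshotId": None,   # manager-level lineage, never read by VDSM
        "schemaVersion": 1,       # lets the manager evolve its own format later
    }

    new_disk_id = createVirtualDisk(
        targetRepoId="myrepo",
        size=20 * 1024 ** 3,
        baseSnapshotId=None,
        userData=json.dumps(manager_metadata),
        options={},
    )
    print(new_disk_id)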
Re: [Engine-devel] [vdsm] RFC: New Storage API
于 2012-12-7 13:23, Deepak C Shetty: On 12/06/2012 10:22 PM, Saggi Mizrahi wrote: - Original Message - From: "Shu Ming" To: "Saggi Mizrahi" Cc: "VDSM Project Development" , "engine-devel" Sent: Thursday, December 6, 2012 11:02:02 AM Subject: Re: [vdsm] RFC: New Storage API Saggi, Thanks for sharing your thought and I get some comments below. Saggi Mizrahi: I've been throwing a lot of bits out about the new storage API and I think it's time to talk a bit. I will purposefully try and keep implementation details away and concentrate about how the API looks and how you use it. First major change is in terminology, there is no long a storage domain but a storage repository. This change is done because so many things are already called domain in the system and this will make things less confusing for new-commers with a libvirt background. One other changes is that repositories no longer have a UUID. The UUID was only used in the pool members manifest and is no longer needed. connectStorageRepository(repoId, repoFormat, connectionParameters={}): repoId - is a transient name that will be used to refer to the connected domain, it is not persisted and doesn't have to be the same across the cluster. repoFormat - Similar to what used to be type (eg. localfs-1.0, nfs-3.4, clvm-1.2). connectionParameters - This is format specific and will used to tell VDSM how to connect to the repo. Where does repoID come from? I think repoID doesn't exist before connectStorageRepository() return. Isn't repoID a return value of connectStorageRepository()? No, repoIDs are no longer part of the domain, they are just a transient handle. The user can put whatever it wants there as long as it isn't already taken by another currently connected domain. So what happens when user mistakenly gives a repoID that is in use before.. there should be something in the return value that specifies the error and/or reason for error so that user can try with a new/diff repoID ? I think let the user to give the repoID is meaningless and error-prune. Developer must maintain a a unique ID list for every storage repository connected. disconnectStorageRepository(self, repoId) In the new API there are only images, some images are mutable and some are not. mutable images are also called VirtualDisks immutable images are also called Snapshots There are no explicit templates, you can create as many images as you want from any snapshot. There are 4 major image operations: createVirtualDisk(targetRepoId, size, baseSnapshotId=None, userData={}, options={}): targetRepoId - ID of a connected repo where the disk will be created size - The size of the image you wish to create baseSnapshotId - the ID of the snapshot you want the base the new virtual disk on userData - optional data that will be attached to the new VD, could be anything that the user desires. options - options to modify VDSMs default behavior IIUC, i can use options to do storage offloads ? For eg. I can create a LUN that represents this VD on my storage array based on the 'options' parameter ? Is this the intended way to use 'options' ? returns the id of the new VD I think we will also need a function to check if a a VirtualDisk is based on a specific snapshot. Like: isSnapshotOf(virtualDiskId, baseSnapshotID): No, the design is that volume dependencies are an implementation detail. There is no reason for you to know that an image is physically a snapshot of another. 
Logical snapshots, template information, and any other information can be set by the user by using the userData field available for every image. createSnapshot(targetRepoId, baseVirtualDiskId, userData={}, options={}): targetRepoId - The ID of a connected repo where the new sanpshot will be created and the original image exists as well. size - The size of the image you wish to create baseVirtualDisk - the ID of a mutable image (Virtual Disk) you want to snapshot userData - optional data that will be attached to the new Snapshot, could be anything that the user desires. options - options to modify VDSMs default behavior returns the id of the new Snapshot copyImage(targetRepoId, imageId, baseImageId=None, userData={}, options={}) targetRepoId - The ID of a connected repo where the new image will be created imageId - The image you wish to copy baseImageId - if specified, the new image will contain only the diff between image and Id. If None the new image will contain all the bits of image Id. This can be used to copy partial parts of images for export. userData - optional data that will be attached to the new image, could be anything that the user desires. options - options to modify VDSMs default behavior Does this function mean that we can copy the image from one repository to another repository? Do
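Following Saggi's point that volume dependencies stay an implementation detail, an isSnapshotOf-style check would live entirely in the manager, against the userData it attached itself. A sketch under that assumption (the JSON layout is invented):

    import json

    def is_based_on(user_data_json, snapshot_id):
        # True if the manager recorded snapshot_id as this disk's logical base.
        data = json.loads(user_data_json) if user_data_json else {}
        return data.get("baseSnapshotId") == snapshot_id

    print(is_based_on('{"baseSnapshotId": "abc-123"}', "abc-123"))   # True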
Re: [Engine-devel] [vdsm] RFC: New Storage API
tatus of the image. The status of the image can be either optimized, degraded, or broken. "Optimized" means that the image is available and you can run VMs of it. "Degraded" means that the image is available and will run VMs but it might be a better way VDSM can represent the underlying data. What does the "represent" mean here? "Broken" means that the image can't be used at the moment, probably because not all the data has been set up on the volume. Apart from that VDSM will also return the last persisted status information which will conatin hostID - the last host to try and optimize of fix the image Any host can optimize the image? No need to be SDM? stage - X/Y (eg. 1/10) the last persisted stage of the fix. percent_complete - -1 or 0-100, the last persisted completion percentage of the aforementioned stage. -1 means that no progress is available for that operation. last_error - This will only be filled if the operation failed because of something other then IO or a VDSM crash for obvious reasons. It will usually be set if the task was manually stopped The user can either be satisfied with that information or as the host specified in host ID if it is still working on that image by checking it's running tasks. So we need a function to know what tasks are running on the image checkStorageRepository(self, repositoryId, options={}): A method to go over a storage repository and scan for any existing problems. This includes degraded\broken images and deleted images that have no yet been physically deleted\merged. It returns a list of Fix objects. Fix objects come in 4 types: clean - cleans data, run them to get more space. optimize - run them to optimize a degraded image merge - Merges two images together. Doing this sometimes makes more images ready optimizing or cleaning. The reason it is different from optimize is that unmerged images are considered optimized. mend - mends a broken image The user can read these types and prioritize fixes. Fixes also contain opaque FIX data and they should be sent as received to fixStorageRepository(self, repositoryId, fix, options={}): That will start a fix operation. All major operations automatically start the appropriate "Fix" to bring the created object to an optimize\degraded state (the one that is quicker) unless one of the options is AutoFix=False. This is only useful for repos that might not be able to create volumes on all hosts (SDM) but would like to have the actual IO distributed in the cluster. Other common options is the strategy option: It has currently 2 possible values space and performance - In case VDSM has 2 ways of completing the same operation it will tell it to value one over the other. For example, whether to copy all the data or just create a qcow based of a snapshot. The default is space. You might have also noticed that it is never explicitly specified where to look for existing images. This is done purposefully, VDSM will always look in all connected repositories for existing objects. For very large setups this might be problematic. To mitigate the problem you have these options: participatingRepositories=[repoId, ...] which tell VDSM to narrow the search to just these repositories and imageHints={imgId: repoId} which will force VDSM to look for those image ID just in those repositories and fail if it doesn't find them there. ___ vdsm-devel mailing list vdsm-de...@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel -- --- 舒明 Shu Ming Open Virtualization Engineerning; CSTL, IBM Corp. 
Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
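A sketch of how a manager might consume the checkStorageRepository result described above: read the fix types, prioritize them, and hand each opaque fix back to fixStorageRepository. The verbs and the Fix structure are the ones proposed in this thread, not an existing API, and repo_fixes is example data:

    FIX_PRIORITY = {"mend": 0, "merge": 1, "clean": 2, "optimize": 3}

    repo_fixes = [
        {"type": "optimize", "data": "opaque-fix-blob-1"},
        {"type": "mend", "data": "opaque-fix-blob-2"},
        {"type": "clean", "data": "opaque-fix-blob-3"},
    ]

    for fix in sorted(repo_fixes, key=lambda f: FIX_PRIORITY[f["type"]]):
        # A real manager would call fixStorageRepository(repoId, fix, options={})
        # here, then poll the image status / running tasks as described above.
        print("would apply fix:", fix["type"])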
[Engine-devel] How to update vds_bootstrap.py file under two different directories in engine server?
Hi, I have a question about the vds_bootstrap.py file in the engine. How do I update the vds_bootstrap.py file after the engine and vdsm-bootstrap packages are updated? I mean the file under the directory "/usr/share/ovirt-engine/engine.ear/components.war/vds/". It seems vds_bootstrap.py was updated along with the vdsm-bootstrap package in the other directory, "/usr/share/vdsm-bootstrap", but not in "/usr/share/ovirt-engine/engine.ear/components.war/vds/". -- --- 舒明 Shu Ming Open Virtualization Engineering; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
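A quick way to confirm whether the two copies mentioned above have actually diverged after a package update, assuming both paths exist on the engine server:

    import hashlib

    paths = [
        "/usr/share/vdsm-bootstrap/vds_bootstrap.py",
        "/usr/share/ovirt-engine/engine.ear/components.war/vds/vds_bootstrap.py",
    ]
    for p in paths:
        with open(p, "rb") as f:
            print(hashlib.md5(f.read()).hexdigest(), p)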
Re: [Engine-devel] is gerrit.ovirt.org down?
It seems gerrit has been down several times recently. Is there any particular reason? On 2012-9-12 22:45, Alon Bar-Lev wrote: yes. - Original Message - From: "Shireesh Anjal" To: engine-devel@ovirt.org Sent: Wednesday, September 12, 2012 5:43:35 PM Subject: [Engine-devel] is gerrit.ovirt.org down? ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- --- 舒明 Shu Ming Open Virtualization Engineering; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] Domain rescan action question
ones vdsm returns that oVirt doesn't have, and then either adding or returning those images. So oVirt's db will be used in the comparison. This will work only when scanning storage domains already attached and in use by the current oVirt setup. What I am talking about is what will happen if a LUN that used to be a SD in another oVirt setup is discovered and scanned, with no engine db to compare with. If we don't consider such a use case, life is definitely quite easy, and we're basically within the scope of the orphaned images feature This use case should definitely be considered, maybe have a separate case where the rescan would return all "compatible" disks (i.e. disks that aren't just partial snapshots and the like) if the domain has not yet been mounted. Essentially, it would run the same comparison, but compare against an empty list rather than a list of disks. There's no way it's as simple as that (I'm unsure of the methods oVirt uses to mount a domain), but it's a good starting point. As far as presenting the user with nameless disks, that's a point I hadn't considered; we could generate some sort of placeholder metadata upon addition to show the user that these are new/orphaned disks that were found on the storage domain. Is it safe to assume that the disks discovered by this feature won't be attached to anything? The oVirt paradigm says "if it isn't in the engine db, it's not ours", so any LV or image we discover that is missing from the DB or the snapshot chain of the image in the DB, is nameless, and orphaned. Such an image on a current SD, belonging to a working oVirt setup is definitely an orphaned image. Attaching these to VMs is usually also useless, because they are more often than not discarded snapshots that didn't get discarded cleanly for some reason. Now, if we want to make this usable, we might want to actually check the qcow2 metadata of the image to see whether it's a mid-chain snapshot (and if so it's probably just a candidate for cleanup), or a standalone qcow2 or raw image, and then we can move on with the virt-* tools, to find out the image size and the filesystems it contains. This will at least provide the user with some usable information about the detected image. If we're talking about scanning an SD that doesn't presently belong to the current oVirt setup, then this is even more relevant, because all of the images will have no VM-related context. We're currently working on having disks created outside of the oVirt environment, so not all orphaned disks on the existing storage domain will be artifacts of supposedly-deleted data. For our use case, disk images created by us will be able to be imported into oVirt and attached to a VM created through the engine. Because of this, saying "if it isn't in the engine db, it's not ours" wouldn't necessarily be true. When you talk about checking the metadata, does either oVirt or vdsm have a simple way to do this? A query of some sort would be ideal for this, as it could be run for each image as a qualifier for import. Also, as far as writing the functionality itself, I'm gathering that it should be structured as a query to return these orphaned images, which can then be acted upon/added to the database through a separate command after checking the validity of each image? [1] or a subtab on the storage domain. 
___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- Regards, Dan Yasny Red Hat Israel +972 9769 2280 -- Regards, Dan Yasny Red Hat Israel +972 9769 2280 -- Regards, Dan Yasny Red Hat Israel +972 9769 2280 ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
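The comparison being discussed above, reduced to its core: whatever the storage domain reports (via getImagesList()) but the engine database does not contain is an orphan/import candidate, and scanning a domain that never belonged to the current setup is the same comparison against an empty list. The image id sets below are example data only:

    vdsm_images = {"img-1", "img-2", "img-3", "img-4"}   # from getImagesList()
    engine_images = {"img-1", "img-3"}                   # from the engine DB

    orphaned = vdsm_images - engine_images
    print("candidates for import/cleanup:", sorted(orphaned))

    # Unattached-domain case: compare against an empty set instead.
    print("unknown-domain candidates:", sorted(vdsm_images - set()))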
Re: [Engine-devel] restapi: Storage-Domain creation - Used Luns
On 2012-7-1 23:10, Andrew Cathrow wrote: - Original Message - From: "Shu Ming" To: "Ori Liel" Cc: "engine-devel" Sent: Sunday, July 1, 2012 10:50:47 AM Subject: Re: [Engine-devel] restapi: Storage-Domain creation - Used Luns SHARED_LUN? We won't be sharing them, we'll be overwriting them. Maybe something like overwrite or force ? So does the lun still belong to the VG? What about VG operations that access the lun while the new storage domain is accessing it? On 2012-7-1 16:50, Ori Liel wrote: We need to enable passing 'used' luns for new storage-domain creation (used = the lun is part of a VG). We need a way in rest-api to explicitly approve the use of such luns. Does anyone have a suggestion for a good name for such a flag, a name that conveys: "use the given luns even if they are part of a VG?" Thanks, Ori. ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] restapi: Storage-Domain creation - Used Luns
SHARED_LUN? On 2012-7-1 16:50, Ori Liel wrote: We need to enable passing 'used' luns for new storage-domain creation (used = the lun is part of a VG). We need a way in rest-api to explicitly approve the use of such luns. Does anyone have a suggestion for a good name for such a flag, a name that conveys: "use the given luns even if they are part of a VG?" Thanks, Ori. ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] [vdsm] RFC: Writeup on VDSM-libstoragemgmt integration
On 2012-6-25 22:14, Deepak C Shetty wrote: On 06/25/2012 07:47 AM, Shu Ming wrote: On 2012-6-25 10:10, Andrew Cathrow wrote: - Original Message - From: "Andy Grover" To: "Shu Ming" Cc: libstoragemgmt-de...@lists.sourceforge.net, engine-devel@ovirt.org, "VDSM Project Development" Sent: Sunday, June 24, 2012 10:05:45 PM Subject: Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmtintegration On 06/24/2012 07:28 AM, Shu Ming wrote: On 2012-6-23 20:40, Itamar Heim wrote: On 06/23/2012 03:09 AM, Andy Grover wrote: On 06/22/2012 04:46 PM, Itamar Heim wrote: On 06/23/2012 02:31 AM, Andy Grover wrote: On 06/18/2012 01:15 PM, Saggi Mizrahi wrote: Also, there is no mention on credentials in any part of the process. How does VDSM or the host get access to actually modify the storage array? Who holds the creds for that and how? How does the user set this up? It seems to me more natural to have the oVirt-engine use libstoragemgmt directly to allocate and export a volume on the storage array, and then pass this info to the vdsm on the node creating the vm. This answers Saggi's question about creds -- vdsm never needs array modification creds, it only gets handed the params needed to connect and use the new block device (ip, iqn, chap, lun). Is this usage model made difficult or impossible by the current software architecture? what about live snapshots? I'm not a virt guy, so extreme handwaving: vm X uses luns 1& 2 engine -> vdsm "pause vm X" that's pausing the VM. live snapshot isn't supposed to do so. Tough we don't expect to do a pausing operation to the VM when live snaphot is undergoing, the VM should be blocked on the access to specific luns for a while. The blocking time should be very short to avoid the storage IO time out in the VM. OK my mistake, we don't pause the VM during live snapshot, we block on access to the luns while snapshotting. Does this keep live snapshots working and mean ovirt-engine can use libsm to config the storage array instead of vdsm? Because that was really my main question, should we be talking about engine-libstoragemgmt integration rather than vdsm-libstoragemgmt integration. for snapshotting wouldn't we want VDSM to handle the coordination of the various atomic functions? I think VDSM-libstoragemgmt will let the storage array itself to make the snapshot and handle the coordination of the various atomic functions. VDSM should be blocked on the following access to the specific luns which are under snapshotting. I kind of agree. If snapshot is being done at the array level, then the array takes care of quiesing the I/O, taking the snapshot and allowing the I/O, why does VDSM have to worry about anything here, it should all happen transparently for VDSM, isnt it ? The only issue is the quiesing may time out the VDSM io functions if it takes a non-trivial time. Not sure if VDSM can handle all the time out gracefully. -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] [vdsm] RFC: Writeup on VDSM-libstoragemgmt integration
On 2012-6-25 10:10, Andrew Cathrow wrote: - Original Message - From: "Andy Grover" To: "Shu Ming" Cc: libstoragemgmt-de...@lists.sourceforge.net, engine-devel@ovirt.org, "VDSM Project Development" Sent: Sunday, June 24, 2012 10:05:45 PM Subject: Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration On 06/24/2012 07:28 AM, Shu Ming wrote: On 2012-6-23 20:40, Itamar Heim wrote: On 06/23/2012 03:09 AM, Andy Grover wrote: On 06/22/2012 04:46 PM, Itamar Heim wrote: On 06/23/2012 02:31 AM, Andy Grover wrote: On 06/18/2012 01:15 PM, Saggi Mizrahi wrote: Also, there is no mention on credentials in any part of the process. How does VDSM or the host get access to actually modify the storage array? Who holds the creds for that and how? How does the user set this up? It seems to me more natural to have the oVirt-engine use libstoragemgmt directly to allocate and export a volume on the storage array, and then pass this info to the vdsm on the node creating the vm. This answers Saggi's question about creds -- vdsm never needs array modification creds, it only gets handed the params needed to connect and use the new block device (ip, iqn, chap, lun). Is this usage model made difficult or impossible by the current software architecture? what about live snapshots? I'm not a virt guy, so extreme handwaving: vm X uses luns 1& 2 engine -> vdsm "pause vm X" that's pausing the VM. live snapshot isn't supposed to do so. Tough we don't expect to do a pausing operation to the VM when live snaphot is undergoing, the VM should be blocked on the access to specific luns for a while. The blocking time should be very short to avoid the storage IO time out in the VM. OK my mistake, we don't pause the VM during live snapshot, we block on access to the luns while snapshotting. Does this keep live snapshots working and mean ovirt-engine can use libsm to config the storage array instead of vdsm? Because that was really my main question, should we be talking about engine-libstoragemgmt integration rather than vdsm-libstoragemgmt integration. for snapshotting wouldn't we want VDSM to handle the coordination of the various atomic functions? I think VDSM-libstoragemgmt will let the storage array itself to make the snapshot and handle the coordination of the various atomic functions. VDSM should be blocked on the following access to the specific luns which are under snapshotting. Thanks -- Regards -- Andy ___ vdsm-devel mailing list vdsm-de...@lists.fedorahosted.org https://fedorahosted.org/mailman/listinfo/vdsm-devel -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] [vdsm] RFC: Writeup on VDSM-libstoragemgmt integration
On 2012-6-23 20:40, Itamar Heim wrote: On 06/23/2012 03:09 AM, Andy Grover wrote: On 06/22/2012 04:46 PM, Itamar Heim wrote: On 06/23/2012 02:31 AM, Andy Grover wrote: On 06/18/2012 01:15 PM, Saggi Mizrahi wrote: Also, there is no mention on credentials in any part of the process. How does VDSM or the host get access to actually modify the storage array? Who holds the creds for that and how? How does the user set this up? It seems to me more natural to have the oVirt-engine use libstoragemgmt directly to allocate and export a volume on the storage array, and then pass this info to the vdsm on the node creating the vm. This answers Saggi's question about creds -- vdsm never needs array modification creds, it only gets handed the params needed to connect and use the new block device (ip, iqn, chap, lun). Is this usage model made difficult or impossible by the current software architecture? what about live snapshots? I'm not a virt guy, so extreme handwaving: vm X uses luns 1& 2 engine -> vdsm "pause vm X" that's pausing the VM. live snapshot isn't supposed to do so. Tough we don't expect to do a pausing operation to the VM when live snaphot is undergoing, the VM should be blocked on the access to specific luns for a while. The blocking time should be very short to avoid the storage IO time out in the VM. engine -> libstoragemgmt "snapshot luns 1, 2 to luns 3, 4" engine -> vdsm "snapshot running state of X to Y" engine -> vdsm "unpause vm X" if engine had any failure before this step, the VM will remain paused. i.e., we compromised the VM to take a live snapshot. engine -> vdsm "change Y to use luns 3, 4" ? -- Andy ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] [virt-node] VDSM as a general purpose virt host manager
On 2012-6-20 5:32, Saggi Mizrahi wrote: There is an important discussion starting about the future of the VDSM API on vdsm-devel. If you want to be involved in the future of the VDSM API don't miss out and join vdsm-devel. To sign up go to https://fedorahosted.org/mailman/listinfo/vdsm-devel There growing need for a way to more easily reuse of the functionality of VDSM in order to service projects other than Ovirt-Engine. Originally VDSM was created as a proprietary agent for the sole purpose of serving the then proprietary version of what is known as ovirt-engine. Red Hat, after acquiring the technology, pressed on with it's commitment to open source ideals and released the code. But just releasing code into the wild doesn't build a community or makes a project successful. Further more when building open source software you should aspire to build reusable components instead of monolithic stacks. We would like to expose a stable, documented, well supported API. This gives us a chance to rethink the VDSM API from the ground up. There is already work in progress of making the internal logic of VDSM separate enough from the API layer so we could continue feature development and bug fixing while designing the API of the future. In order to achieve this though we need to do several things: 1. Declare API supportability guidelines 2. Decide on an API transport (e.g. REST, ZMQ, AMQP) 3. Make the API easily consumable (e.g. proper docs, example code, extending the API, etc) 4. Implement the API itself Now, VDSM version was highly bound with oVirt Engine version. In order to make oVirt to work, both VDSM and ovirt engine should be synced with the latest binary, no back compatibility yet. If we want to break this binding out, we should classify the level of the VDSM APIs like these: 1) public stable 2) public evolving 3) undocumented volatile And the we should make sure public stable interfaces to be supported in a very long time as possible as we can. public evolving interfaces should keep the compatibility in the same major release(ie, 4.x.x). undocumented volatile is not recommended to the application and it is the responsibility of the application to take the risk. All of these are dependent on one another and the permutations are endless. This is why I think we should try and work on each one separately. All discussions will be done openly on the mailing list and until the final version comes out nothing is set in stone. If you think you have anything to contribute to this process, please do so either by commenting on the discussions or by sending code/docs/whatever patches. Once the API solidifies it will be quite difficult to change fundamental things, so speak now or forever hold your peace. Note that this is just an introductory email. There will be a quick follow up email to kick start the discussions. Don't forget to sign up to the vdsm-devel mailing list. ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
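One possible way to express the three support levels suggested above in code, so that documentation tooling and API consumers can tell verbs apart; purely illustrative, not part of VDSM, and the verb names below are placeholders:

    def api_level(level):
        # Tag a verb with its support level: 'public-stable' keeps long-term
        # compatibility, 'public-evolving' is stable within a major release,
        # 'volatile' is undocumented and may change at any time.
        assert level in ("public-stable", "public-evolving", "volatile")
        def mark(func):
            func.api_level = level
            return func
        return mark

    @api_level("public-stable")
    def getCapabilities():
        ...

    @api_level("volatile")
    def _internalHelper():
        ...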
[Engine-devel] Questions about ovirt engine
Hi, I created a VM in my engine, and I found that every time I run the VM after a shutdown, the VM UUID changes to a different one, while the VM itself is unchanged, including the VM name. Why doesn't the engine keep the UUID for a VM and use the same UUID every time the VM starts? And how can I persist a setting made to a VM with "vdsClient", like the password set by the "setVmTicket" command? -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
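For what it's worth, the ticket set by setVmTicket is, as far as I know, held only by the running VM, so it has to be re-applied after every restart rather than persisted. A rough sketch of re-applying it by wrapping vdsClient is below; the exact argument order of setVmTicket can differ between VDSM versions, so check "vdsClient -s 0 setVmTicket" on your host first, and note that the UUID and password are placeholders.

# Re-apply a console ticket after a VM restart by wrapping vdsClient.
# "-s 0" talks to the local VDSM over SSL; the argument order of
# setVmTicket may differ between VDSM versions.
import subprocess

def reapply_ticket(vm_id, password, valid_secs="120"):
    cmd = ["vdsClient", "-s", "0", "setVmTicket",
           vm_id, password, valid_secs]
    return subprocess.call(cmd)

reapply_ticket("00000000-0000-0000-0000-000000000000", "my-secret")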
Re: [Engine-devel] [vdsm] RFC: Writeup on VDSM-libstoragemgmt integration
er/pool free/used space, raid type etc. Need to make sure the above info is listed in a coherent way across arrays (number of LUNs, raid type used, free/total per container/pool and per LUN?). Also need I/O statistics wherever possible. _______ vdsm-devel mailing list vdsm-de...@lists.fedorahosted.org https://fedorahosted.org/mailman/listinfo/vdsm-devel -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
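As a side note on how this data can be gathered, a rough sketch against libstoragemgmt's Python binding is below. The module layout and attribute names have shifted across libstoragemgmt releases, so treat every call here as an approximation rather than a fixed API; "sim://" is the library's built-in simulator URI, useful for experimenting without a real array.

# Approximate listing of pool and volume capacity via libstoragemgmt's
# Python binding; names may differ between lsm releases.
from lsm import Client

client = Client("sim://")        # simulator; replace with the real array URI

for pool in client.pools():
    print pool.name, pool.total_space, pool.free_space

for vol in client.volumes():
    print vol.name, vol.size_bytes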
Re: [Engine-devel] Deploy ovirt-engine error
t.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) 10:39:51,708 ERROR [stderr] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) 10:39:51,709 ERROR [stderr] at java.lang.Thread.run(Thread.java:722) 10:39:51,709 ERROR [stderr] Caused by: java.lang.IllegalStateException: JBAS014922: Directory /usr/share/jboss-as/standalone/data/content is not writable 10:39:51,709 ERROR [stderr] at org.jboss.as.repository.ContentRepository$Factory$ContentRepositoryImpl.<init>(ContentRepository.java:123) 10:39:51,709 ERROR [stderr] at org.jboss.as.repository.ContentRepository$Factory.addService(ContentRepository.java:97) 10:39:51,709 ERROR [stderr] at org.jboss.as.server.ApplicationServerService.start(ApplicationServerService.java:134) 10:39:51,710 ERROR [stderr] at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) 10:39:51,710 ERROR [stderr] at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) 10:39:51,710 ERROR [stderr] ... 3 more ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
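The relevant line in this trace is JBAS014922: the JBoss content directory is not writable by the user the application server runs as. A hedged sketch of checking and fixing the ownership is below; it assumes JBoss runs as the "jboss-as" user, as in a default ovirt-engine install of that era, so adjust the user/group to your setup.

# Check who owns the JBoss deployment data directory and hand it to the
# application server user.  Assumes JBoss runs as "jboss-as".
import os
import pwd
import subprocess

data_dir = "/usr/share/jboss-as/standalone/data"
st = os.stat(data_dir)
print "currently owned by:", pwd.getpwuid(st.st_uid).pw_name

# equivalent to: chown -R jboss-as:jboss-as /usr/share/jboss-as/standalone/data
subprocess.call(["chown", "-R", "jboss-as:jboss-as", data_dir])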
[Engine-devel] A small patch for iso-uploader
Hi, Can someone review and approve this small patch? http://gerrit.ovirt.org/#/c/4554/ -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
[Engine-devel] engine complained that it couldn't find the ovirtmgmt interface in my FC16 ovirt node
) TX bytes:36667 (35.8 KiB) [root@ovirt-node1 ~]# [root@ovirt-node1 ~]# ifconfig -a|more bond0 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MASTER MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) bond1 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MASTER MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) bond2 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MASTER MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) bond3 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MASTER MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) bond4 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MASTER MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) eth0 Link encap:Ethernet HWaddr 5C:F3:FC:E4:32:A0 inet6 addr: fe80::5ef3:fcff:fee4:32a0/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:2036 errors:0 dropped:0 overruns:0 frame:0 TX packets:246 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:184913 (180.5 KiB) TX bytes:65801 (64.2 KiB) Interrupt:28 Memory:9200-92012800 eth1 Link encap:Ethernet HWaddr 5C:F3:FC:E4:32:A2 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) Interrupt:40 Memory:9400-94012800 loLink encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:13186 errors:0 dropped:0 overruns:0 frame:0 TX packets:13186 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:3278642 (3.1 MiB) TX bytes:3278642 (3.1 MiB) ovirtmgmt Link encap:Ethernet HWaddr 5C:F3:FC:E4:32:A0 inet addr:9.181.129.110 Bcast:9.181.129.255 Mask:255.255.255.0 inet6 addr: fe80::5ef3:fcff:fee4:32a0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1562 errors:0 dropped:69 overruns:0 frame:0 TX packets:225 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:116628 (113.8 KiB) TX bytes:62627 (61.1 KiB) p4p1 Link encap:Ethernet HWaddr 00:00:C9:E5:A1:36 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) p4p2 Link encap:Ethernet HWaddr 00:00:C9:E5:A1:3A BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) usb0 Link encap:Ethernet HWaddr 5E:F3:FC:DC:32:A3 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 
txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) [root@ovirt-node1 ~]# -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
[Engine-devel] oVirt automated test suites
Hi, I need a reasonable test suite for the VDSM APIs. I know most tests are done manually from the engine side, but I think we need a reasonable test suite that defines the typical engine workflows calling into the VDSM APIs and makes sure all the functions behave well. Since the engine publishes REST APIs, an automated test suite can be built on top of the REST APIs without much difficulty. Any comments about this idea? -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
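To make this less abstract, here is a minimal sketch of the kind of smoke check such a suite could start from, hitting the engine REST entry point. The host name, port, credentials and the /api path are placeholders/assumptions for a local 3.x setup, and a real suite would of course script whole workflows (add host, create VM, run, snapshot, ...) rather than a single GET.

# Minimal smoke test against the oVirt engine REST API entry point.
# Host, port, credentials and the /api path are placeholders.
import base64
import urllib2
import xml.etree.ElementTree as ET

url = "http://ovirt-engine-112:8080/api"
req = urllib2.Request(url)
req.add_header("Authorization",
               "Basic " + base64.b64encode("admin@internal:password"))

root = ET.fromstring(urllib2.urlopen(req).read())
print "entry point returned:", root.tag        # expected to be "api"
for link in root.findall("link"):               # top-level collections (vms, hosts, ...)
    print link.get("rel"), "->", link.get("href")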
Re: [Engine-devel] is gerrit.ovirt.org down?
It seems the ping latency is high. C:\Documents and Settings\Administrator>ping gerrit.ovirt.org Pinging gerrit.ovirt.org [107.22.212.69] with 32 bytes of data: Reply from 107.22.212.69: bytes=32 time=388ms TTL=46 Reply from 107.22.212.69: bytes=32 time=373ms TTL=46 Reply from 107.22.212.69: bytes=32 time=369ms TTL=46 Reply from 107.22.212.69: bytes=32 time=369ms TTL=46 Ping statistics for 107.22.212.69: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 369ms, Maximum = 388ms, Average = 374ms C:\Documents and Settings\Administrator> On 2012-5-23 0:43, Shireesh Anjal wrote: I have been unable to access the web UI or work with the repository for around half an hour now... ~Shireesh ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] Where is the git workspace of "engine-iso-uploader"?
On 2012-5-18 23:26, Juan Hernandez wrote: On 05/18/2012 05:19 PM, Shu Ming wrote: On 2012-5-18 20:54, Juan Hernandez wrote: On 05/18/2012 11:04 AM, Shu Ming wrote: From this page, I am told that it should be in the same workspace as ovirt-engine. http://www.ovirt.org/project/subprojects/ However, I cannot find it in the ovirt-engine workspace. Where did it go? Try this: git clone git://gerrit.ovirt.org/ovirt-iso-uploader I tried "git clone git://gerrit.ovirt.org/engine-iso-uploader", with no luck. I think the names ovirt-iso-uploader and engine-iso-uploader get mixed up from time to time. I just did it again and it works correctly for me: Sorry for the confusion. I cloned the ovirt-iso-uploader workspace successfully. I had thought the workspace name was engine-iso-uploader. $ git clone git://gerrit.ovirt.org/ovirt-iso-uploader Cloning into 'ovirt-iso-uploader'... remote: Counting objects: 37, done. remote: Compressing objects: 100% (20/20), done. remote: Total 37 (delta 10), reused 37 (delta 10) Receiving objects: 100% (37/37), 23.70 KiB, done. Resolving deltas: 100% (10/10), done. Network or DNS problems? -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] Where is the git workspace of "engine-iso-uploader"?
On 2012-5-18 20:54, Juan Hernandez wrote: On 05/18/2012 11:04 AM, Shu Ming wrote: From this page, I am told that it should be in the same workspace as ovirt-engine. http://www.ovirt.org/project/subprojects/ However, I cannot find it in the ovirt-engine workspace. Where did it go? Try this: git clone git://gerrit.ovirt.org/ovirt-iso-uploader I tried "git clone git://gerrit.ovirt.org/engine-iso-uploader", with no luck. I think the names ovirt-iso-uploader and engine-iso-uploader get mixed up from time to time. -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
[Engine-devel] Where is the git workspace of "engine-iso-uploader"?
Hi, From this page, I am told that it should be in the same workspace as ovirt-engine: http://www.ovirt.org/project/subprojects/ However, I cannot find it in the ovirt-engine workspace. Where did it go? -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] Stuck at the firewall.conf in vds_bootstrap
After changing /scripts/vds_installer.py to remove the "-f" option and correct "-v" to "-V", the host node can now be installed and started. It looks like the options passed to vds_installer.py or vds_bootstrap.py were broken. On 2012-5-10 0:23, Shu Ming wrote: Hi, My vds_installer was stuck running vds_bootstrap when I tried to add a host node in my ovirt-engine. For some reason, the /tmp/firewall.conf.* file was not downloaded, and that caused vds_bootstrap's failure. Is there any quick way to disable the "-f" option for vds_bootstrap on the engine server or on the host node? That would stop VDSM from overwriting the firewall configuration on the host. --- From the vds_install.xx.log on my host, here is the command where vds_installer was stuck: /tmp/vds_bootstrap_1a9728df-e3b4-4d7b-8258-835038587bad.py -v -O cstl -t 2012-05-09T16:00:04 -f /tmp/firewall.conf.1a9728df-e3b4-4d7b-8258-835038587bad http://ovirt-engine-112:80/Components/vds/ 9.181.129.110 1a9728df-e3b4-4d7b-8258-835038587bad -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
[Engine-devel] Stuck at the firewall.conf in vds_bootstrap
Hi, My vds_installer was stuck running vds_bootstrap when I tried to add a host node in my ovirt-engine. For some reason, the /tmp/firewall.conf.* file was not downloaded, and that caused vds_bootstrap's failure. Is there any quick way to disable the "-f" option for vds_bootstrap on the engine server or on the host node? That would stop VDSM from overwriting the firewall configuration on the host. --- From the vds_install.xx.log on my host, here is the command where vds_installer was stuck: /tmp/vds_bootstrap_1a9728df-e3b4-4d7b-8258-835038587bad.py -v -O cstl -t 2012-05-09T16:00:04 -f /tmp/firewall.conf.1a9728df-e3b4-4d7b-8258-835038587bad http://ovirt-engine-112:80/Components/vds/ 9.181.129.110 1a9728df-e3b4-4d7b-8258-835038587bad -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
[Engine-devel] vds_bootstrap.py's residency
Hi, I checked the VDSM and ovirt-engine workspaces for the "vds_bootstrap.py" file. It turns out vds_bootstrap.py lives in the VDSM workspace and is packaged into the vdsm-bootstrap rpm package. I also found that, during host installation, the host node tries to fetch "vds_bootstrap.py" from the engine server, yet "vds_bootstrap.py" is not included in any engine package. Does that mean we should install the vdsm-bootstrap package on the engine server? Why not ship this file in the engine packages instead of the vdsm packages? -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] Updating ovirt engine from 3.0 to the 3.1.0 version
After "touch engine.ear.doreply" in /usr/share/jboss-as-7.1.0.Beta1b/standalone/deployments, ovirt engine can be accessed now. Thanks for your support. On 2012-5-2 0:50, Ofer Schreiber wrote: What about creating the engine.ear.dodeploy file as suggested earlier? On 1 May 2012, at 18:15, Shu Ming <mailto:shum...@linux.vnet.ibm.com>> wrote: Oops. Actually, I deleted the engine.ear.failed and engine.ear was there untouched. Her is the directory hierarchy now in jboss deployment directory. [root@ovirt-engine-112 deployments]# pwd /usr/share/jboss-as-7.1.0.Beta1b/standalone/deployments [root@ovirt-engine-112 deployments]# ls -l total 16 lrwxrwxrwx. 1 root root 34 Apr 28 01:14 engine.ear -> /usr/share/ovirt-engine/engine.ear -rw-r--r--. 1 jboss-as jboss-as 8868 Dec 2 05:10 README.txt lrwxrwxrwx. 1 root root 48 Apr 28 01:14 ROOT.war -> /usr/share/ovirt-engine/resources/jboss/ROOT.war -rw-rw-r--. 1 jboss-as jboss-as8 Apr 28 01:14 ROOT.war.deployed [root@ovirt-engine-112 deployments]# On 2012-5-1 4:11, Ofer Schreiber wrote: Please re create the link to engine.ear. That's the base java code that contains webadmin. If it doesn't deploy, create also engine.ear.dodeploy Ofer. On 30 Apr 2012, at 19:23, Shu Ming <mailto:shum...@linux.vnet.ibm.com>> wrote: After deleting the engine.ear in /usr/share/jboss-as-7.1.0.Beta1b/standalone/deployments, I can access the page now. Thanks for you suggestions. However, when I tried to access "Administrator Portal(no SSL)". The following errors were encountered. HTTP Status 404 - /webadmin *type* Status report *message* _/webadmin_ *description* _The requested resource (/webadmin) is not available._ JBoss Web/7.0.3.Final On 2012-4-30 15:22, Ofer Schreiber wrote: Looks like something went wrong with the .ear deployment. (already saw such an issue inhttp://lists.ovirt.org/pipermail/engine-devel/2012-January/000483.html) Can you please: 1. Stop JBoss 2. Make sure jboss deployment directory contain a single engine.ear - Remove engine.ear.failed (or similar files if exists) 3. Start JBoss again - Original Message - On 2012-4-29 19:33, Ofer Schreiber wrote: I need more info about this: 1. It looks like a clean install (DB created from scratch). why do you call it an upgrade? Sorry for confusion here. I used engine-cleanup and ran engine-setup again after the 3.1 RPM package were installed. 2. Do you see any error in JBoss/engine logs? (/var/log/jboss-as and /var/log/ovirt-engine usually) [root@ovirt-engine-112 ~]# cat /var/log/jboss-as/console.log 01:15:16,901 INFO [org.jboss.as] (Controller Boot Thread) JBoss AS 7.1.0.Beta1b "Tesla" started in 2405ms - Started 140 of 203 services (61 services are passive or on-demand) 01:15:17,207 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015014: Re-attempting failed deployment engine.ear 01:15:17,431 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015014: Re-attempting failed deployment ROOT.war 01:15:17,432 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found ROOT.war in deployment directory. To trigger deployment create a file called ROOT.war.dodeploy 01:15:17,432 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found engine.ear in deployment directory. 
To trigger deployment create a file called engine.ear.dodeploy 01:15:17,478 ERROR [org.jboss.as.controller] (DeploymentScanner-threads - 2) JBAS014612: Operation ("add") failed - address: ([("deployment" => "engine.ear")]): java.lang.IllegalStateException: JBAS014666: Duplicate resource [("deployment" => "engine.ear")] at org.jboss.as.controller.OperationContextImpl.addResource(OperationContextImpl.java:503) [jboss-as-controller-7.1.0.Beta1b.jar:] at org.jboss.as.controller.OperationContextImpl.createResource(OperationContextImpl.java:471) [jboss-as-controller-7.1.0.Beta1b.jar:] at org.jboss.as.server.deployment.DeploymentAddHandler.execute(DeploymentAddHandler.java:170) at java.util.concurrent.FutureTask.run(FutureTask.java:166) [:1.6.0_24] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165) [:1.6.0_24] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266) [:1.6.0_24] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [:1.6.0_24] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [:1.6.0_24]
Re: [Engine-devel] Updating ovirt engine from 3.0 to the 3.1.0 version
Oops. Actually, I deleted the engine.ear.failed and engine.ear was there untouched. Here is the directory hierarchy now in the jboss deployment directory. [root@ovirt-engine-112 deployments]# pwd /usr/share/jboss-as-7.1.0.Beta1b/standalone/deployments [root@ovirt-engine-112 deployments]# ls -l total 16 lrwxrwxrwx. 1 root root 34 Apr 28 01:14 engine.ear -> /usr/share/ovirt-engine/engine.ear -rw-r--r--. 1 jboss-as jboss-as 8868 Dec 2 05:10 README.txt lrwxrwxrwx. 1 root root 48 Apr 28 01:14 ROOT.war -> /usr/share/ovirt-engine/resources/jboss/ROOT.war -rw-rw-r--. 1 jboss-as jboss-as 8 Apr 28 01:14 ROOT.war.deployed [root@ovirt-engine-112 deployments]# On 2012-5-1 4:11, Ofer Schreiber wrote: Please re-create the link to engine.ear. That's the base java code that contains webadmin. If it doesn't deploy, also create engine.ear.dodeploy Ofer. On 30 Apr 2012, at 19:23, Shu Ming <shum...@linux.vnet.ibm.com> wrote: After deleting the engine.ear in /usr/share/jboss-as-7.1.0.Beta1b/standalone/deployments, I can access the page now. Thanks for your suggestions. However, when I tried to access the "Administrator Portal (no SSL)", the following errors were encountered: HTTP Status 404 - /webadmin type: Status report message: /webadmin description: The requested resource (/webadmin) is not available. JBoss Web/7.0.3.Final On 2012-4-30 15:22, Ofer Schreiber wrote: Looks like something went wrong with the .ear deployment. (already saw such an issue in http://lists.ovirt.org/pipermail/engine-devel/2012-January/000483.html) Can you please: 1. Stop JBoss 2. Make sure the jboss deployment directory contains a single engine.ear - Remove engine.ear.failed (or similar files if they exist) 3. Start JBoss again - Original Message - On 2012-4-29 19:33, Ofer Schreiber wrote: I need more info about this: 1. It looks like a clean install (DB created from scratch). Why do you call it an upgrade? Sorry for the confusion here. I used engine-cleanup and ran engine-setup again after the 3.1 RPM packages were installed. 2. Do you see any error in JBoss/engine logs? (/var/log/jboss-as and /var/log/ovirt-engine usually) [root@ovirt-engine-112 ~]# cat /var/log/jboss-as/console.log 01:15:16,901 INFO [org.jboss.as] (Controller Boot Thread) JBoss AS 7.1.0.Beta1b "Tesla" started in 2405ms - Started 140 of 203 services (61 services are passive or on-demand) 01:15:17,207 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015014: Re-attempting failed deployment engine.ear 01:15:17,431 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015014: Re-attempting failed deployment ROOT.war 01:15:17,432 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found ROOT.war in deployment directory. To trigger deployment create a file called ROOT.war.dodeploy 01:15:17,432 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found engine.ear in deployment directory. 
To trigger deployment create a file called engine.ear.dodeploy 01:15:17,478 ERROR [org.jboss.as.controller] (DeploymentScanner-threads - 2) JBAS014612: Operation ("add") failed - address: ([("deployment" => "engine.ear")]): java.lang.IllegalStateException: JBAS014666: Duplicate resource [("deployment" => "engine.ear")] at org.jboss.as.controller.OperationContextImpl.addResource(OperationContextImpl.java:503) [jboss-as-controller-7.1.0.Beta1b.jar:] at org.jboss.as.controller.OperationContextImpl.createResource(OperationContextImpl.java:471) [jboss-as-controller-7.1.0.Beta1b.jar:] at org.jboss.as.server.deployment.DeploymentAddHandler.execute(DeploymentAddHandler.java:170) at java.util.concurrent.FutureTask.run(FutureTask.java:166) [:1.6.0_24] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165) [:1.6.0_24] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266) [:1.6.0_24] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [:1.6.0_24] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [:1.6.0_24] at java.lang.Thread.run(Thread.java:679) [:1.6.0_24] at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.0.0.GA.jar:] 01:15:17,482 ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS014654: Composite operation was rolled back 01:15:17,483 ERROR [org.jboss.as.server.deployment.scanne
Re: [Engine-devel] Updating ovirt engine from 3.0 to the 3.1.0 version
[ DONE ] Creating CA... [ DONE ] Editing JBoss Configuration... [ DONE ] Setting Database Configuration...[ DONE ] Setting Database Security... [ DONE ] Creating Database... [ DONE ] Updating the Default Data Center Storage Type... [ DONE ] Editing oVirt Engine Configuration...[ DONE ] Editing Postgresql Configuration... [ DONE ] Configuring the Default ISO Domain...[ DONE ] Configuring Firewall (iptables)... [ DONE ] Starting JBoss Service...[ DONE ] Handling HTTPD...[ DONE ] Installation completed successfully ** (Please allow oVirt Engine a few moments to start up.) Additional information: * SSL Certificate fingerprint: C6:01:83:93:4B:2C:2A:38:65:C8:49:C9:17:34:FE:4B:1C:10:D5:FF * SSH Public key fingerprint: 69:8c:bd:05:43:17:0a:43:a3:cc:62:7e:f7:be:0c:42 * A default ISO share has been created on this host. If IP based access restrictions are required, please edit /iso-dom entry in /etc/exports * The firewall has been updated, the old iptables configuration file was saved to /usr/share/ovirt-engine/conf/iptables.backup.011513-04282012_2225 * The installation log file is available at: /var/log/ovirt-engine/engine-setup_2012_04_28_01_13_01.log * Please use the user "admin" and password specified in order to login into oVirt Engine * To configure additional users, first configure authentication domains using the 'engine-manage-domains' utility * To access oVirt Engine please go to the following URL: http://ovirt-engine-112:80 [root@ovirt-engine-112 ~]# -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- Shu Ming IBM China Systems and Technology Laboratory -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] Updating ovirt engine from 3.0 to the 3.1.0 version
Certificate fingerprint: C6:01:83:93:4B:2C:2A:38:65:C8:49:C9:17:34:FE:4B:1C:10:D5:FF * SSH Public key fingerprint: 69:8c:bd:05:43:17:0a:43:a3:cc:62:7e:f7:be:0c:42 * A default ISO share has been created on this host. If IP based access restrictions are required, please edit /iso-dom entry in /etc/exports * The firewall has been updated, the old iptables configuration file was saved to /usr/share/ovirt-engine/conf/iptables.backup.011513-04282012_2225 * The installation log file is available at: /var/log/ovirt-engine/engine-setup_2012_04_28_01_13_01.log * Please use the user "admin" and password specified in order to login into oVirt Engine * To configure additional users, first configure authentication domains using the 'engine-manage-domains' utility * To access oVirt Engine please go to the following URL: http://ovirt-engine-112:80 [root@ovirt-engine-112 ~]# -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
[Engine-devel] Updating ovirt engine from 3.0 to the 3.1.0 version
to /usr/share/ovirt-engine/conf/iptables.backup.011513-04282012_2225 * The installation log file is available at: /var/log/ovirt-engine/engine-setup_2012_04_28_01_13_01.log * Please use the user "admin" and password specified in order to login into oVirt Engine * To configure additional users, first configure authentication domains using the 'engine-manage-domains' utility * To access oVirt Engine please go to the following URL: http://ovirt-engine-112:80 [root@ovirt-engine-112 ~]# -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] [Jenkins] Auto Generated ovirt-engine rpms for fedora 16
On 2012-3-15 4:45, Eyal Edri wrote: FYI, a new Jenkins job has been added to jenkins.ovirt.org. You can now download fresh ovirt-engine rpms after each commit. The rpms are created and the latest are stored here: http://jenkins.ovirt.org/view/ovirt_engine/job/ovirt_engine_create_rpms/ The job will also notify the author in case his/her commit breaks the rpm build. How does it notify the submitter? By echoing messages in the job output or by email notification? Latest rpms for now: ovirt-engine-3.1.0_0001-1.8.fc16.src.rpm ovirt-engine-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-backend-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-config-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-dbscripts-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-debuginfo-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-genericapi-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-image-uploader-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-iso-uploader-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-jboss-deps-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-log-collector-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-notification-service-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-restapi-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-setup-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-tools-common-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-userportal-3.1.0_0001-1.8.fc16.x86_64.rpm ovirt-engine-webadmin-portal-3.1.0_0001-1.8.fc16.x86_64.rpm Enjoy, Eyal Edri oVirt Infra Team ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] engine.log was gone
On 2012-2-15 18:30, David Jaša wrote: On Wed, 15 Feb 2012 at 17:17 +0800, Shu Ming wrote: Hi, I deleted the engine.log under /var/log/ovirt-engine by mistake. It seems that the log file cannot be recreated no matter whether engine-cleanup and engine-setup are run. Even rebooting the server didn't work. Is there any quick way to bring my engine.log back? I would try to touch the file, modify its permissions to match those of the other log files in the directory and then restore its SELinux context. That is what I did, but I don't know how to restore its SELinux context. Do you have an example? David -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
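For the record, a small sketch of restoring the default SELinux context once the file exists again. It assumes the standard policy tools (restorecon, matchpathcon) are installed, as they are on a stock Fedora engine host, and that the file should simply get back its policy-default label.

# Recreate the log file and reset its SELinux label to the policy default.
# Equivalent to: touch engine.log; restorecon -v /var/log/ovirt-engine/engine.log
import subprocess

logfile = "/var/log/ovirt-engine/engine.log"
subprocess.call(["touch", logfile])               # recreate if missing
subprocess.call(["restorecon", "-v", logfile])    # reset to the default context
# "matchpathcon <path>" shows what the default context for the path should be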
[Engine-devel] engine.log was gone
Hi, I deleted the engine.log under /var/log/ovirt-engine by mistake. It seems that the log file cannot be recreated no matter whether engine-cleanup and engine-setup are run. Even rebooting the server didn't work. Is there any quick way to bring my engine.log back? -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [Engine-devel] Low Level design for HotPlug/HotUnplug feature
Suppose we unplug and then re-plug a disk: will VDSM expect the disk to be at the same device path as before? More generally, will unplugging and re-plugging keep the PCI device address the same as before? On 2012-1-9 17:21, Michael Kublin wrote: Hi, the following link provides a low-level design for the HotPlug/HotUnplug feature: http://www.ovirt.org/wiki/Features/DetailedHotPlug . The feature is simple and the design is short. Regards Michael ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel -- Shu Ming IBM China Systems and Technology Laboratory ___ Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
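For reference, the PCI address generally stays the same only if it is specified explicitly when the device is re-attached; otherwise libvirt/qemu is free to pick a new slot. A rough sketch with libvirt's Python binding is below; the VM name, disk path, target dev and the chosen slot are placeholders, not values from the design page.

# Re-attach a disk at a fixed PCI address so it comes back on the same
# slot after an unplug/plug cycle.  Without the <address> element,
# libvirt/qemu is free to choose a slot on its own.
import libvirt

disk_xml = """
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/mapper/example-lun'/>
  <target dev='vdb' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("example-vm")
dom.attachDevice(disk_xml)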