[Engine-devel] Instance Types Feature
Hi All,

this is the proposed new feature called instance types:
http://www.ovirt.org/Features/Instance_Types

Long story short - it should basically split the VM template into:
- a hardware profile called an instance type
- a software profile called an image

This should enable you to do something like: "Create a new small VM and attach a disk with RHEL + Postgres installed."

Any comments are more than welcome!

Tomas

___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel
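The proposed split can be sketched as a tiny data model (a hypothetical illustration only; all class and field names below are invented, not the actual engine entities):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InstanceType:
    """Hardware profile: what the proposal calls an instance type."""
    name: str
    cpus: int
    memory_mb: int

@dataclass(frozen=True)
class Image:
    """Software profile: a disk with an OS/software stack installed."""
    name: str
    disk_id: str

@dataclass
class Vm:
    """A VM combines a hardware profile with a software profile."""
    name: str
    instance_type: InstanceType
    image: Image

# "Create a new small VM and attach a disk with RHEL + Postgres installed."
small = InstanceType(name="small", cpus=1, memory_mb=2048)
rhel_pg = Image(name="rhel-postgres", disk_id="disk-0001")
vm = Vm(name="db1", instance_type=small, image=rhel_pg)
```

In this picture, a classic template would simply be an instance type paired with an image.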
Re: [Engine-devel] RFE: Actions on tasks (jobs)
On Monday 21 January 2013 05:47 PM, Yair Zaslavsky wrote:
> ----- Original Message -----
> From: Shireesh Anjal <san...@redhat.com>
> To: engine-devel@ovirt.org
> Sent: Monday, January 21, 2013 2:08:14 PM
> Subject: [Engine-devel] RFE: Actions on tasks (jobs)
>
>> Hi,
>>
>> We plan to introduce support for gluster tasks in oVirt, using the
>> current Jobs/steps framework. This means any gluster async task started
>> on a cluster will be shown as a Job in the Tasks tab. While this helps in
>> listing and monitoring the status of all gluster tasks, we have a
>> requirement to support a set of actions on such tasks. One should be able
>> to select a task and then, if supported for that task, perform one or
>> more of the following actions on it:
>>
>> - pause
>> - resume
>> - abort
>> - commit
>>
>> I think this can probably be achieved by introducing a generic mechanism
>> for performing actions on a task, allowing each type of task to define
>> what actions are allowed on it in its current state. Requesting
>> opinions/suggestions on possible ways to achieve this requirement.
>
> Several points -
>
> 1. By "performing actions" - you probably mean the entry point to the
> engine will be a BLL command? StopTaskCommand, for example? If so, we need
> to think about the permissions of the command. Who can stop a task, for
> example? What is its role?

I see that Job extends BusinessEntity<Guid>, and hence it should be possible
to assign permissions on it, just like any other entity? Though I think this
doesn't fit directly in the current UI, as Jobs is not a main tab. In case of
gluster, I would add this permission to the 'GlusterAdmin' role.

> 2. You mentioned task type - does this mean extending the AsyncTaskType
> enum? Are you going to have your own Tasks enum?

I'm not sure about AsyncTaskType, as I believe it is related to the vdsm
async tasks and AsyncTaskManager, and I don't think we'll be going there. In
our case, the command being executed in the job itself could be synonymous
with the task type.
What I mean by task-type specific actions is: not all actions are applicable
to all types of tasks. E.g. commit is applicable only to a replace-brick
task, and not to a rebalance-volume task, whereas abort is applicable to
both. So the corresponding command should dictate what actions can be
performed on the task.

Thanks,
Shireesh
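The generic mechanism described above - each task type declaring which actions it allows - could be sketched like this (a minimal illustration in Python; the real engine code is Java, all names are invented, and the exact action sets per task type are assumptions beyond the commit/abort examples given in the thread):

```python
from enum import Enum, auto

class TaskAction(Enum):
    PAUSE = auto()
    RESUME = auto()
    ABORT = auto()
    COMMIT = auto()

# Which actions each task type supports. Per the thread: commit applies to
# replace-brick only, while abort applies to both; the rest is illustrative.
SUPPORTED_ACTIONS = {
    "replace_brick": {TaskAction.PAUSE, TaskAction.ABORT, TaskAction.COMMIT},
    "rebalance_volume": {TaskAction.PAUSE, TaskAction.RESUME, TaskAction.ABORT},
}

def can_perform(task_type: str, action: TaskAction) -> bool:
    """True if the given action is allowed for this task type."""
    return action in SUPPORTED_ACTIONS.get(task_type, set())
```

A command handling, say, a commit request would first check `can_perform` and reject the action for task types that do not support it.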
Re: [Engine-devel] RFE: Actions on tasks (jobs)
On Monday 21 January 2013 08:49 PM, Moti Asayag wrote:
> On 01/21/2013 02:08 PM, Shireesh Anjal wrote:
>> Hi,
>>
>> We plan to introduce support for gluster tasks in oVirt, using the
>> current Jobs/steps framework. This means any gluster async task started
>> on a cluster will be shown as a Job in the Tasks tab. While this helps in
>> listing and monitoring the status of all gluster tasks, we have a
>> requirement to support a set of actions on such tasks. One should be able
>> to select a task and then, if supported for that task, perform one or
>> more of the following actions on it:
>>
>> - pause
>> - resume
>> - abort
>> - commit
>
> Could you explain the meaning of 'commit' in this context?

The replace-brick task in gluster migrates all data from an existing brick
to the new brick which is replacing it. After all the data is migrated, the
user needs to perform a commit for the new brick to come into effect.
http://gluster.org/community/documentation/index.php/Gluster_3.1:_Migrating_Volumes

> In addition, is there a need for a 'restart' action?

At present, we don't need such an action as far as gluster is concerned.

> Generally, a command (user action / internal action) is mapped to a Job
> when it's monitored. The job may contain several steps, where each step
> might represent an async task (i.e. a task running on a node, a vdsm
> task). There are two levels of control: controlling a specific vdsm
> task/step, or controlling the entire job. I think that the granularity of
> the action should be at the Job level, since a step's result (assuming a
> cancelled step is considered failed) will determine the job's status as
> failed/aborted - therefore the rest of the running steps should also be
> aborted.

+1

> If needed, a 'restart' operation could also be supported relatively easily
> at the job level (it requires saving the action's parameters for a
> re-run). The Jobs cleanup manager should take care of cleaning cancelled
> jobs and keeping paused jobs.
>
> Are the suggested actions supported by the AsyncTaskManager and by VDSM?

No.
These are not vdsm tasks. They'll be managed completely by glusterfs. We
plan to introduce gluster-specific verbs in vdsm for
starting/managing/monitoring these tasks: http://gerrit.ovirt.org/10200
We'll then have a background periodic job in the engine to fetch the status
of these tasks and update it in the Job repository.

>> I think this can probably be achieved by introducing a generic mechanism
>> for performing actions on a task, allowing each type of task to define
>> what actions are allowed on it in its current state. Requesting
>> opinions/suggestions on possible ways to achieve this requirement.
>>
>> Thanks,
>> Shireesh
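The background periodic job described above could look roughly like this (a hypothetical sketch; `fetch_gluster_task_statuses` stands in for the gluster-specific vdsm verbs under review and is not a real API):

```python
def fetch_gluster_task_statuses():
    """Stand-in for the gluster-specific vdsm verbs that report task status.
    In reality this would call into vdsm; here it returns canned data."""
    return {"task-1": "RUNNING", "task-2": "FINISHED"}

def sync_job_repository(job_repo):
    """One iteration of the engine's background job: pull the current task
    statuses from the hosts and update the Job repository accordingly."""
    for task_id, status in fetch_gluster_task_statuses().items():
        job_repo[task_id] = status
    return job_repo

# The engine would run this periodically, e.g. on a timer:
repo = sync_job_repository({})
```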
[Engine-devel] RFE: Health Indication on UI
Hi,

Currently, oVirt understands the relationship hierarchies of the various
entities in the system, and even allows setting permissions based on this
hierarchy. Can this information further be used to enhance the health
monitoring of the various entities in the system?

Let me take an example. Consider this hierarchy (gluster):

System -> Cluster -> Volume -> Brick

If one or more bricks of a volume are down, which indicates a problem, the
following entities should be overlaid with a warning symbol (say, a yellow
exclamation mark):

- the Volume to which the brick(s) belong(s)
- its parent Cluster
- the System

This helps in highlighting what parts of the system are affected by
failures/problems at the lowest-level entities, and gives the administrator
a better idea of the overall health of the entire environment.

Thoughts?

Thanks,
Shireesh
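The upward propagation described above can be sketched as (a hypothetical illustration; `Entity` and `flag_problem` are invented names, not actual engine classes):

```python
class Entity:
    """A node in the entity hierarchy (System, Cluster, Volume, Brick...)."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.warning = False  # whether the UI shows a warning overlay

def flag_problem(entity):
    """Mark the entity and every ancestor with the warning overlay."""
    while entity is not None:
        entity.warning = True
        entity = entity.parent

# System -> Cluster -> Volume -> Brick
system = Entity("System")
cluster = Entity("Cluster", parent=system)
volume = Entity("Volume", parent=cluster)
brick = Entity("Brick", parent=volume)

flag_problem(brick)  # a brick goes down; volume, cluster and system light up
```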
Re: [Engine-devel] RFE: Health Indication on UI
On 22/01/2013 06:47, Shireesh Anjal wrote:
> Hi,
>
> Currently, oVirt understands the relationship hierarchies of the various
> entities in the system, and even allows setting permissions based on this
> hierarchy. Can this information further be used to enhance the health
> monitoring of the various entities in the system?
>
> Let me take an example. Consider this hierarchy (gluster):
>
> System -> Cluster -> Volume -> Brick
>
> If one or more bricks of a volume are down, which indicates a problem, the
> following entities should be overlaid with a warning symbol (say, a yellow
> exclamation mark):
>
> - the Volume to which the brick(s) belong(s)
> - its parent Cluster
> - the System
>
> This helps in highlighting what parts of the system are affected by
> failures/problems at the lowest-level entities, and gives the
> administrator a better idea of the overall health of the entire
> environment. Thoughts?
>
> Thanks,
> Shireesh

Interesting idea. I wonder if the tree could represent this, and if it would
require some refresh for the status to update. I also wonder if the scheme
should be based on the status of entities, which requires specific logic, or
on some sort of alert-like mechanism (since alerts are similar to the
problems you describe above). Then you could leverage the fact that
events/alerts include all entities in the hierarchy, making it very easy to
highlight an entity as having an issue/alert/problem based on it having any
open alerts/problems.
Re: [Engine-devel] [vdsm] RFC: New Storage API
2013-1-15 5:34, Ayal Baron:
> image and volume are overused everywhere and it would be extremely
> confusing to have multiple meanings to the same terms in the same system
> (we have image today which means virtual disk and volume which means a
> part of a virtual disk). Personally I don't like the distinction between
> image and volume done in ec2/openstack/etc seeing as they're treated as
> different types of entities there while the only real difference is
> mutability (images are read-only, volumes are read-write). To move to the
> industry terminology we would need to first change all references we have
> today to image and volume in the system (I would say also on the
> ovirt-engine side) to align with the new meaning. Despite my personal
> dislike of the terms, I definitely see the value in converging on the same
> terminology as the rest of the industry, but to do so would be an arduous
> task which is out of scope of this discussion imo (patches welcome though ;)

Another distinction between OpenStack and oVirt is how Nova/ovirt-engine
look upon storage systems. In OpenStack, a standalone storage service
(Cinder) exports the raw storage block device to Nova. On the other hand, in
oVirt the storage system is tightly coupled with the cluster scheduling
system, which integrates the storage sub-system, the VM dispatching
sub-system and the ISO image sub-system. This combination makes all of the
sub-systems integrate into a whole which is easy to deploy, but it makes
each sub-system more opaque and harder to reuse and maintain. This new
storage API proposal gives us an opportunity to separate these sub-systems
into new components which export better, loosely coupled APIs to VDSM.

--
舒明 Shu Ming
Open Virtualization Engineering; CSTL, IBM Corp.
Tel: 86-10-82451626  Tieline: 9051626
E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC
Re: [Engine-devel] Instance Types Feature
Hi Tomas,

I like the idea in general, but to me instance types look like a feature
that is typical in public clouds, while in private clouds it looks more like
a cool extra feature for special cases. Therefore, in my opinion it would be
great to keep the old template solution as well, to keep things simple for
most users. A template could be an instance type and an image together; only
the descriptions overlap, and that could be solved.

Thanks,
Laszlo

----- Original Message -----
From: Tomas Jelinek <tjeli...@redhat.com>
To: engine-devel@ovirt.org
Sent: Tuesday, January 22, 2013 3:09:51 PM
Subject: [Engine-devel] Instance Types Feature

> Hi All,
>
> this is the proposed new feature called instance types:
> http://www.ovirt.org/Features/Instance_Types
>
> Long story short - it should basically split the VM template into:
> - a hardware profile called an instance type
> - a software profile called an image
>
> This should enable you to do something like: "Create a new small VM and
> attach a disk with RHEL + Postgres installed."
>
> Any comments are more than welcome!
>
> Tomas
Re: [Engine-devel] [vdsm] RFC: New Storage API
On Tue, Jan 22, 2013 at 11:36:57PM +0800, Shu Ming wrote:
> 2013-1-15 5:34, Ayal Baron:
>> image and volume are overused everywhere and it would be extremely
>> confusing to have multiple meanings to the same terms in the same system
>> (we have image today which means virtual disk and volume which means a
>> part of a virtual disk). Personally I don't like the distinction between
>> image and volume done in ec2/openstack/etc seeing as they're treated as
>> different types of entities there while the only real difference is
>> mutability (images are read-only, volumes are read-write). To move to the
>> industry terminology we would need to first change all references we have
>> today to image and volume in the system (I would say also on the
>> ovirt-engine side) to align with the new meaning. Despite my personal
>> dislike of the terms, I definitely see the value in converging on the
>> same terminology as the rest of the industry, but to do so would be an
>> arduous task which is out of scope of this discussion imo (patches
>> welcome though ;)
>
> Another distinction between OpenStack and oVirt is how Nova/ovirt-engine
> look upon storage systems. In OpenStack, a standalone storage service
> (Cinder) exports the raw storage block device to Nova. On the other hand,
> in oVirt the storage system is tightly coupled with the cluster scheduling
> system, which integrates the storage sub-system, the VM dispatching
> sub-system and the ISO image sub-system. This combination makes all of the
> sub-systems integrate into a whole which is easy to deploy, but it makes
> each sub-system more opaque and harder to reuse and maintain. This new
> storage API proposal gives us an opportunity to separate these sub-systems
> into new components which export better, loosely coupled APIs to VDSM.

A very good point and an important goal in my opinion. I'd like to see
ovirt-engine become more of a GUI for configuring the storage component
(like it does for Gluster) rather than the centralized manager of storage.
The clustered storage should be able to take care of itself as long as the
peer hosts can negotiate the SDM role. It would be cool if someone could
actually dedicate a non-virtualization host whose only job is to handle SDM
operations. Such a host could choose to deploy only the standalone HSM
service and not the complete vdsm package.

--
Adam Litke <a...@us.ibm.com>
IBM Linux Technology Center
[Engine-devel] maven build trick
hi,

I recommend that you add this command to your build script:

rm -rf ~/.m2/repository/org/ovirt

This will remove the generated oVirt artifacts from your local maven repo.

Why is this needed? If you remove an artifact from the build that other
artifacts depend on, those artifacts will still be able to resolve the now
missing dependency from the local repo (which is wrong). This way your build
can still depend on an outdated artifact without breaking. That is why you
would not notice that the build is broken: maven will just treat the stale
artifact as a third-party one.

Thx,
Laszlo
Re: [Engine-devel] [vdsm] RFC: New Storage API
----- Original Message -----
From: Adam Litke <a...@us.ibm.com>
To: Shu Ming <shum...@linux.vnet.ibm.com>
Cc: engine-devel <engine-devel@ovirt.org>, VDSM Project Development <vdsm-de...@lists.fedorahosted.org>
Sent: Tuesday, January 22, 2013 2:20:19 PM
Subject: Re: [vdsm] [Engine-devel] RFC: New Storage API

> On Tue, Jan 22, 2013 at 11:36:57PM +0800, Shu Ming wrote:
>> 2013-1-15 5:34, Ayal Baron:
>>> image and volume are overused everywhere and it would be extremely
>>> confusing to have multiple meanings to the same terms in the same system
>>> (we have image today which means virtual disk and volume which means a
>>> part of a virtual disk). Personally I don't like the distinction between
>>> image and volume done in ec2/openstack/etc seeing as they're treated as
>>> different types of entities there while the only real difference is
>>> mutability (images are read-only, volumes are read-write). To move to
>>> the industry terminology we would need to first change all references we
>>> have today to image and volume in the system (I would say also on the
>>> ovirt-engine side) to align with the new meaning. Despite my personal
>>> dislike of the terms, I definitely see the value in converging on the
>>> same terminology as the rest of the industry, but to do so would be an
>>> arduous task which is out of scope of this discussion imo (patches
>>> welcome though ;)
>>
>> Another distinction between OpenStack and oVirt is how Nova/ovirt-engine
>> look upon storage systems. In OpenStack, a standalone storage service
>> (Cinder) exports the raw storage block device to Nova. On the other hand,
>> in oVirt the storage system is tightly coupled with the cluster
>> scheduling system, which integrates the storage sub-system, the VM
>> dispatching sub-system and the ISO image sub-system. This combination
>> makes all of the sub-systems integrate into a whole which is easy to
>> deploy, but it makes each sub-system more opaque and harder to reuse and
>> maintain.
>> This new storage API proposal gives us an opportunity to separate these
>> sub-systems into new components which export better, loosely coupled APIs
>> to VDSM.
>
> A very good point and an important goal in my opinion. I'd like to see
> ovirt-engine become more of a GUI for configuring the storage component
> (like it does for Gluster) rather than the centralized manager of storage.
> The clustered storage should be able to take care of itself as long as the
> peer hosts can negotiate the SDM role. It would be cool if someone could
> actually dedicate a non-virtualization host whose only job is to handle
> SDM operations. Such a host could choose to deploy only the standalone HSM
> service and not the complete vdsm package.

OpenStack and oVirt have different architectures and goals. Even though they
are both marketed as IaaS solutions, they are designed for different
purposes.

OpenStack is designed around the idea of simplifying the *development* and
*integration* of IaaS subsystems through standardization of interfaces. If
you design a system that requires access to some type of infrastructural
resource, you can develop against the OpenStack API for that specific
resource and consume different underlying implementations of the subsystem.
Alternatively, if you are creating a new subsystem implementation, one of
your exposed APIs can be the appropriate OpenStack API. In short, they are a
group of loosely coupled services meant to be replicated and distributed in
a cluster, and everyone can create their own implementations of the APIs.

oVirt is designed around the idea of simplifying the *management* of said
infrastructure. The ovirt-engine is the cluster manager and VDSM is the host
manager. Every host in the cluster has a host manager installed on it
(VDSM), and some (currently only one) might have the cluster manager
(ovirt-engine); together they are the effective brain. oVirt ideally only
has managing entities.
VDSM's APIs delegate tasks that are in its scope to other subsystems, and
those subsystems have their own APIs. For VMs you have libvirt; for
networking you have the Linux management tools and maybe netcf; for policy
we now have MOM; for iSCSI we have iscsiadm; etc. The only odd one out is
the image provisioning subsystem, which I will get to, don't worry.

This means, if you didn't already gather, that no host managed by oVirt can
exist without either VDSM or the ovirt-engine living on it. That being said,
I am a huge proponent of making all subsystems optional, meaning you could
have a VDSM that doesn't have the libvirt or networking glue bits and just
has storage and gluster. To put it simply: no host without a *managing*
entity on it.

But, as you all have pointed out, VDSM is redundant. There is no reason why
the engine can't just directly ask libvirt to do things. There is no reason
why we can't make a general iSCSI management API and expose it on its own,
independent from other services. VDSM is a frankensteinesque abomination of
misplaced BL and pass-through APIs. This is why everyone are