[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13766558#comment-13766558
 ] 

Sateesh Chodapuneedi commented on CLOUDSTACK-4664:
--------------------------------------------------

This issue is limited to the case where the volume is on zone-wide primary 
storage and there are two or more zone-wide primary storage pools in the zone.

The delay in starting the VM is caused by volume migration across the pools, 
as the following log confirms:

===
2013-09-13 13:28:54,958 DEBUG [cloud.storage.VolumeManagerImpl] 
(Job-Executor-37:job-34 = [ 3bf3d22c-f309-4c8c-a233-cdd30fb1c67b ]) Mismatch in 
storage pool Pool[1|NetworkFilesystem] assigned by deploymentPlanner and the 
one associated with volume Vol[6|vm=6|ROOT]
2013-09-13 13:28:54,958 DEBUG [cloud.storage.VolumeManagerImpl] 
(Job-Executor-37:job-34 = [ 3bf3d22c-f309-4c8c-a233-cdd30fb1c67b ]) Shared 
volume Vol[6|vm=6|ROOT] will be migrated on storage pool 
Pool[1|NetworkFilesystem] assigned by deploymentPlanner
===

When starting a VM, even though the volume is ready, the deployment planner 
assumes the storage pool containing the volume does not fit the deployment 
plan and tries to find another storage pool. When this attempt ends up 
choosing another pool, it triggers a volume migration, which causes the delay. 
The following condition evaluates to FALSE because a zone-wide storage pool 
has no clusterId or podId associated with it.
----
if (plan.getDataCenterId() == exstPoolDcId && plan.getPodId() == exstPoolPodId
        && plan.getClusterId() == exstPoolClusterId)
----
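The failure mode can be reproduced outside CloudStack. The sketch below (simplified illustration, not CloudStack code; the method name and parameters are invented for this example) mirrors the snippet's logic: the zone-wide pool's null podId/clusterId are mapped to -1, which can never equal the pod and cluster ids the planner put into the plan, so the check fails even though the pool is in the right zone.

```java
// Minimal illustration of why the check above is FALSE for a zone-wide pool.
// planMatchesPool() is a hypothetical helper mirroring the quoted condition.
public class ZoneWidePoolCheck {

    static boolean planMatchesPool(long planDcId, Long planPodId, Long planClusterId,
                                   long poolDcId, Long poolPodId, Long poolClusterId) {
        // Same logic as the snippet: a null pod/cluster on the pool becomes -1.
        long exstPoolPodId = poolPodId != null ? poolPodId : -1;
        long exstPoolClusterId = poolClusterId != null ? poolClusterId : -1;
        return planDcId == poolDcId
                && planPodId == exstPoolPodId
                && planClusterId == exstPoolClusterId;
    }

    public static void main(String[] args) {
        // Plan targets zone 1, pod 1, cluster 1; the volume's pool is
        // zone-wide, so its podId and clusterId are null.
        boolean fits = planMatchesPool(1L, 1L, 1L, 1L, null, null);
        // false: the pool is rejected and the volume gets migrated.
        System.out.println("zone-wide pool fits plan? " + fits);
    }
}
```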

Here is the code snippet containing this condition. 

#############
            // If the plan specifies a poolId, it means that this VM's ROOT
            // volume is ready and the pool should be reused.
            // In this case, also check if rest of the volumes are ready and can
            // be reused.
            if (plan.getPoolId() != null) {
                s_logger.debug("Volume has pool already allocated, checking if pool can be reused, poolId: "
                        + toBeCreated.getPoolId());
                List<StoragePool> suitablePools = new ArrayList<StoragePool>();
                StoragePool pool = null;
                if (toBeCreated.getPoolId() != null) {
                    pool = (StoragePool) this.dataStoreMgr.getPrimaryDataStore(toBeCreated.getPoolId());
                } else {
                    pool = (StoragePool) this.dataStoreMgr.getPrimaryDataStore(plan.getPoolId());
                }

                if (!pool.isInMaintenance()) {
                    if (!avoid.shouldAvoid(pool)) {
                        long exstPoolDcId = pool.getDataCenterId();
                        long exstPoolPodId = pool.getPodId() != null ? pool.getPodId() : -1;
                        long exstPoolClusterId = pool.getClusterId() != null ? pool.getClusterId() : -1;
                        if (plan.getDataCenterId() == exstPoolDcId && plan.getPodId() == exstPoolPodId
                                && plan.getClusterId() == exstPoolClusterId) {
                            s_logger.debug("Planner need not allocate a pool for this volume since its READY");
                            suitablePools.add(pool);
                            suitableVolumeStoragePools.put(toBeCreated, suitablePools);
                            if (!(toBeCreated.getState() == Volume.State.Allocated
                                    || toBeCreated.getState() == Volume.State.Creating)) {
                                readyAndReusedVolumes.add(toBeCreated);
                            }
                            continue;
                        } else {
                            s_logger.debug("Pool of the volume does not fit the specified plan, need to reallocate a pool for this volume");
                        }
                    } else {
                        s_logger.debug("Pool of the volume is in avoid set, need to reallocate a pool for this volume");
                    }
                } else {
                    s_logger.debug("Pool of the volume is in maintenance, need to reallocate a pool for this volume");
                }
            }

            if (s_logger.isDebugEnabled()) {
                s_logger.debug("We need to allocate new storagepool for this volume");
            }
#############
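One possible direction for a fix is sketched below. This is an illustration, not the actual CloudStack patch: the `ScopeType` enum and method shape are simplified assumptions. The idea is that a zone-wide pool is reachable from any pod/cluster in the zone, so when the pool's scope is ZONE only the data center id needs to match for the pool to be reused.

```java
// Hypothetical sketch of a scope-aware reuse check (not the real patch):
// compare pod/cluster ids only for cluster-scoped pools.
public class ZoneWidePoolFixSketch {

    // Simplified stand-in for CloudStack's storage scope notion.
    enum ScopeType { ZONE, CLUSTER }

    static boolean planMatchesPool(ScopeType poolScope,
                                   long planDcId, Long planPodId, Long planClusterId,
                                   long poolDcId, Long poolPodId, Long poolClusterId) {
        if (planDcId != poolDcId) {
            return false;
        }
        if (poolScope == ScopeType.ZONE) {
            // A zone-wide pool has no pod/cluster; the zone match alone is
            // enough to reuse it, so no migration is triggered.
            return true;
        }
        long exstPoolPodId = poolPodId != null ? poolPodId : -1;
        long exstPoolClusterId = poolClusterId != null ? poolClusterId : -1;
        return planPodId == exstPoolPodId && planClusterId == exstPoolClusterId;
    }

    public static void main(String[] args) {
        // Zone-wide pool in the same zone is now accepted rather than migrated.
        System.out.println(planMatchesPool(ZoneWidePoolFixSketch.ScopeType.ZONE,
                1L, 1L, 1L, 1L, null, null));
    }
}
```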

                
> [ZWPS] High delay to start a stopped VM which has ROOT/DATA volumes migrated 
> to Second Zone wide primary Storage(More than 10 mins)
> -----------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: CLOUDSTACK-4664
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4664
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the 
> default.) 
>          Components: Storage Controller
>    Affects Versions: 4.2.1
>            Reporter: Sailaja Mada
>            Priority: Critical
>         Attachments: alllogs.rar, VMStartTimeinVMWARE.png
>
>
> Steps:
> 1. Configure VMWARE with 2 zone wide primary storages
> 2. Deploy 2 VM's
> 3. Stop one of the VM
> 4. Start the VM
> Observation:
> [VMWARE][ZWPS] High delay to start a stopped VM which has ROOT/DATA volumes 
> migrated to Second Zone wide primary Storage(More than 10 mins)
> (Attached all the logs and Snap from vCenter) 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
