Re: Failed to copy the volume from the source primary storage pool to secondary storage
Thanks for the response, Nicolas. Here are the log entries from management-server.log corresponding to the specific time period of running the download volume command:

2019-06-10 14:00:35,893 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-86:ctx-68e69b5d job-2512) (logid:3ba6ac78) Executing AsyncJobVO {id:2512, userId: 2, accountId: 2, instanceType: Volume, instanceId: 26, cmd: org.apache.cloudstack.api.command.user.volume.ExtractVolumeCmd, cmdInfo: {"mode":"HTTP_DOWNLOAD","response":"json","ctxUserId":"2","zoneid":"74a2a355-725a-4389-abe5-a24d52b5b7de","httpmethod":"GET","ctxStartEventId":"8780","id":"3a552f54-8d82-452a-ac5c-5144495d38c0","ctxDetails":"{\"interface com.cloud.dc.DataCenter\":\"74a2a355-725a-4389-abe5-a24d52b5b7de\",\"interface com.cloud.storage.Volume\":\"3a552f54-8d82-452a-ac5c-5144495d38c0\"}","ctxAccountId":"2","uuid":"3a552f54-8d82-452a-ac5c-5144495d38c0","cmdEventType":"VOLUME.EXTRACT","_":"1560200437111"}, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 14038012851765, completeMsid: null, lastUpdated: null, lastPolled: null, created: null}
2019-06-10 14:00:35,933 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-86:ctx-68e69b5d job-2512 ctx-addef465) (logid:3ba6ac78) Sync job-2513 execution on object VmWorkJobQueue.13
2019-06-10 14:00:36,307 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-6b017172) (logid:e6da8580) Execute sync-queue item: SyncQueueItemVO {id:327, queueId: 223, contentType: AsyncJob, contentId: 2513, lastProcessMsid: 14038012851765, lastprocessNumber: 114, lastProcessTime: Mon Jun 10 14:00:36 MST 2019, created: Mon Jun 10 14:00:35 MST 2019}
2019-06-10 14:00:36,308 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-6b017172) (logid:e6da8580) Schedule queued job-2513
2019-06-10 14:00:36,331 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-42:ctx-a1e70224 job-2512/job-2513) (logid:426ea95f) Add job-2513 into job monitoring
2019-06-10 14:00:36,337 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-42:ctx-a1e70224 job-2512/job-2513) (logid:3ba6ac78) Executing AsyncJobVO {id:2513, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkExtractVolume, cmdInfo: rO0ABXNyACBjb20uY2xvdWQudm0uVm1Xb3JrRXh0cmFjdFZvbHVtZfgl82-871PmAgACSgAIdm9sdW1lSWRKAAZ6b25lSWR4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1ldAASTGphdmEvbGFuZy9TdHJpbmc7eHAAAgACAA10ABRWb2x1bWVBcGlTZXJ2aWNlSW1wbAAaAAE, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 14038012851765, completeMsid: null, lastUpdated: null, lastPolled: null, created: Mon Jun 10 14:00:35 MST 2019}
2019-06-10 14:00:36,337 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-42:ctx-a1e70224 job-2512/job-2513) (logid:3ba6ac78) Run VM work job: com.cloud.vm.VmWorkExtractVolume for VM 13, job origin: 2512
2019-06-10 14:00:36,339 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-42:ctx-a1e70224 job-2512/job-2513 ctx-2a57f8fd) (logid:3ba6ac78) Execute VM work job: com.cloud.vm.VmWorkExtractVolume{"volumeId":26,"zoneId":1,"userId":2,"accountId":2,"vmId":13,"handlerName":"VolumeApiServiceImpl"}
2019-06-10 14:00:36,380 ERROR [o.a.c.s.v.VolumeServiceImpl] (Work-Job-Executor-42:ctx-a1e70224 job-2512/job-2513 ctx-2a57f8fd) (logid:3ba6ac78) failed to copy volume to image store
java.lang.NullPointerException
    at org.apache.cloudstack.storage.motion.StorageSystemDataMotionStrategy.isVolumeOnManagedStorage(StorageSystemDataMotionStrategy.java:202)
    at org.apache.cloudstack.storage.motion.StorageSystemDataMotionStrategy.canHandle(StorageSystemDataMotionStrategy.java:182)
    at org.apache.cloudstack.storage.helper.StorageStrategyFactoryImpl$1.canHandle(StorageStrategyFactoryImpl.java:52)
    at org.apache.cloudstack.storage.helper.StorageStrategyFactoryImpl$1.canHandle(StorageStrategyFactoryImpl.java:49)
    at org.apache.cloudstack.storage.helper.StorageStrategyFactoryImpl.bestMatch(StorageStrategyFactoryImpl.java:95)
    at org.apache.cloudstack.storage.helper.StorageStrategyFactoryImpl.getDataMotionStrategy(StorageStrategyFactoryImpl.java:49)
    at org.apache.cloudstack.storage.motion.DataMotionServiceImpl.copyAsync(DataMotionServiceImpl.java:62)
    at org.apache.cloudstack.storage.motion.DataMotionServiceImpl.copyAsync(DataMotionServiceImpl.java:73)
    at org.apache.cloudstack.storage.volume.VolumeServiceImpl.copyVolumeFromPrimaryToImage(VolumeServiceImpl.java:1371)
    at org.apache.cloudstack.storage.volume.VolumeServiceImpl.copyVolume(VolumeServiceImpl.java:1417)
    at com.cloud.storage.VolumeApiServiceImpl.orchestrateExtractVolume(VolumeApiServiceImpl.java:2515)
    at com.cloud.storage.VolumeApiServiceImpl.orchestrateExtractVolume(VolumeApiServiceImpl.java:3139)
Failed to copy the volume from the source primary storage pool to secondary storage
Greetings, I am trying to figure out the steps for migrating a KVM-based guest to a Xen-based guest. I have attempted to download the KVM volume so that I can attach it to a Xen-based VM, however I get the following error when trying to download the volume: "Failed to copy the volume from the source primary storage pool to secondary storage". My secondary storage is 90% full, but it still has over 500 GB available and this VM is under 60 GB, so there is plenty of room for the VM on secondary. Any advice or direction would be appreciated. Asai
Snapshots Have Stopped Running On Some VMs
Greetings, I have been doing some maintenance on my Cloudstack instance (4.11.1.0) and I have found that some of my VM snapshots quit running back in February. I found this error in the management-server.log:

2019-05-02 10:01:21,547 DEBUG [c.c.s.StorageManagerImpl] (StorageManager-Scavenger-1:ctx-5b4e7b0e) (logid:00439c5c) Secondary storage garbage collector found 252 snapshots to cleanup on snapshot_store_ref for store: Avalon Secondary Storage
2019-05-02 10:01:21,552 WARN [c.c.s.StorageManagerImpl] (StorageManager-Scavenger-1:ctx-5b4e7b0e) (logid:00439c5c) problem cleaning up snapshots in snapshot_store_ref for store: Avalon Secondary Storage
    at org.apache.cloudstack.storage.snapshot.SnapshotObject.getId(SnapshotObject.java:159)
    at org.apache.cloudstack.storage.snapshot.SnapshotObject.getChild(SnapshotObject.java:124)

Can anyone shed some light on this, and advise how I can get my snapshots running again? Asai
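A hedged first diagnostic step for the scavenger failure above — a sketch only, assuming direct read access to the `cloud` database and that one plausible cause applies (snapshot_store_ref rows left pointing at snapshot rows that are gone or already marked removed, which can make the cleanup loop fail when it dereferences the snapshot). Table and column names are taken on trust from the schema the log itself references; verify them, and back up the database before changing anything:

```sql
-- Hedged sketch: look for snapshot_store_ref entries whose backing snapshot
-- row is missing or removed (store from the log: "Avalon Secondary Storage").
SELECT ssr.id, ssr.snapshot_id, ssr.store_id, ssr.state
FROM cloud.snapshot_store_ref ssr
LEFT JOIN cloud.snapshots s ON s.id = ssr.snapshot_id
WHERE s.id IS NULL
   OR s.removed IS NOT NULL;
```

If this returns rows matching the 252 entries the scavenger reported, that would support the dangling-reference theory; it is read-only, so it is safe to run as a check.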
Re: Resource Allocation Question
Thank you for your response. So, is there any way we can take advantage of the processor speed that doesn’t seem to be allocated? We’re not clear on how this works. It seems like there’s a tremendous amount of underutilization here. Although we have allocated 20 out of 24 cores, it seems like we’re not able to access the processors’ resources the way it seems we should. What are we missing here? > On Dec 13, 2018, at 2:27 PM, Andrija Panic wrote: > > In my case, hyperthreading number of cores (whatever OS sees as number of > cores) times the TurboBoost GHZ is what I get, so for 2x8 cored 2.6Ghz (32 > cores with hyperthreading) I get 32 x 3.4 Ghz, which is not what you can > achieve at any time (that frequency can be obtained on only very few cores > at any moment, not nearly all cores). > > Go to Host in GUI and check its statistics, you will see - take into > account any overprovisioning applied > > Cheers > > On Thu, Dec 13, 2018, 20:19 Rafael Weingärtner wrote: > >> You have a four core CPU, with 8 threads each? Then, each thread is 2.4 >> GHz. So, 2.4 * 4 * 8. >> >> On Thu, Dec 13, 2018 at 6:14 PM Asai wrote: >> >>> Greetings, >>> >>> I have a simple question regarding Cloudstack resource allocation. >>> >>> In the dashboard, I’m seeing that our Memory is 25 out of 30 GB and the >>> circle is colored red. # of CPUs is 20 out of 24 and is colored red. >>> But >>> CPU is 33 GHz out of 76 GHz. We have a single Xeon E5-2620 v3 @ 2.40GHz. >>> This is a 4 core / 8 thread CPU. How is it that we have 76 GHz available? >>> It >>> seems like we’d have around 57 or so GHz available. >>> >>> Thanks for your assistance. >>> >>> Asai >>> >>> >> >> -- >> Rafael Weingärtner >>
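Rafael's arithmetic in the thread above can be reproduced directly. The snippet below is just that arithmetic; the overprovisioning factor is an assumption (CloudStack's cpu.overprovisioning.factor setting can scale the dashboard figure, and it is taken as 1.0 here):

```python
# Sketch of the dashboard figure discussed above: CloudStack reports
# per-thread clock speed x number of logical CPUs (x any overprovisioning
# factor, assumed 1.0 here), not a frequency usable all at once.
def dashboard_cpu_ghz(clock_ghz, cores, threads_per_core, overprovision=1.0):
    return clock_ghz * cores * threads_per_core * overprovision

# Rafael's numbers: 2.4 GHz * 4 cores * 8 threads -> the ~76 GHz shown.
print(round(dashboard_cpu_ghz(2.4, 4, 8), 1))  # 76.8
# The ~57 GHz the poster expected matches 2.4 GHz x 24 "cores": 57.6.
print(round(2.4 * 24, 1))                      # 57.6
```

The point of the arithmetic: the dashboard capacity is a bookkeeping total over logical CPUs, which is why it exceeds what the silicon can deliver simultaneously.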
Resource Allocation Question
Greetings, I have a simple question regarding Cloudstack resource allocation. In the dashboard, I’m seeing that our Memory is 25 out of 30 GB and the circle is colored red. # of CPUs is 20 out of 24 and is colored red. But CPU is 33 GHz out of 76 GHz. We have a single Xeon E5-2620 v3 @ 2.40GHz. This is a 4 core / 8 thread CPU. How is it that we have 76 GHz available? It seems like we’d have around 57 or so GHz available. Thanks for your assistance. Asai
Re: Upgrading from 4.9 to 4.10
Hello Dag, I took your direction and added more secondary storage, and that worked for a little while, but the secondary storage has crept over 80% again and I can’t add another VM again. I actually have nearly 2 TB of secondary storage available but that’s still considered "low storage space" by Cloudstack. Isn’t there any way to override this? Thanks, Asai > On Aug 16, 2018, at 1:06 AM, Dag Sonstebo wrote: > > Asai, > > The simplest way is to just add another secondary storage pool. > > Regards, > Dag Sonstebo > Cloud Architect > ShapeBlue > > On 15/08/2018, 22:13, "Asai" wrote: > >Another question on this subject, our secondary storage is throwing alerts > for low storage and it seems like I can’t upload anything to it at this > point. Can I change any settings to allow continued use of secondary storage > even when storage is low, since we’re only talking about a few hundred > megabytes here? > >Thanks, >Asai > > > > dag.sonst...@shapeblue.com > www.shapeblue.com > 53 Chandos Place, Covent Garden, London WC2N 4HSUK > @shapeblue > > > >> On Aug 15, 2018, at 9:17 AM, Asai wrote: >> >> OK, thanks for that advice. >> >> I found out the problem. It was lack of storage. >> Asai >> >> >>> On Aug 15, 2018, at 9:14 AM, ilya musayev >>> wrote: >>> >>> +1 on 4.11 - it’s LTS release and got much more attention >>> >>> On Wed, Aug 15, 2018 at 9:13 AM Dag Sonstebo >>> wrote: >>> >>>> Asai, >>>> >>>> First of all I strongly advise you to upgrade to 4.11.1 instead of 4.10 – >>>> this will cause you a lot less pain. >>>> >>>> With regards to the template upload in 4.9 – do template uploads normally >>>> work? I’d suggest you check through the management-server.log and cloud.log >>>> on the SSVM to troubleshoot further. Also maybe destroy the SSVM and let >>>> this recreate, just in case it’s not healthy. 
>>>> >>>> Regards, >>>> Dag Sonstebo >>>> Cloud Architect >>>> ShapeBlue >>>> >>>> On 15/08/2018, 17:09, "Asai" wrote: >>>> >>>> Greetings, >>>> >>>> We’re attempting an upgrade from 4.9 to 4.10, but we cannot seem to >>>> get past the SystemVM 4.10 download stage. When registering a new template >>>> according to the documentation, the newly created systemvm-4.10 never >>>> enters the ready state. I have tried downloading from the repository as >>>> well as uploading the systemvm from my local computer but it never seems to >>>> complete, and we cannot move forward. >>>> >>>> Can anyone share any insights into this problem? >>>> Asai >>>> >>>> >>>> dag.sonst...@shapeblue.com >>>> www.shapeblue.com >>>> 53 Chandos Place, Covent Garden, London WC2N 4HSUK >>>> @shapeblue >>>> >>>> >>>> >>>> >> > > >
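For reference, the "low storage space" alert discussed in the thread above is driven by a fractional threshold, not by absolute free space, which is why nearly 2 TB free can still count as low. The exact global-setting name for the secondary-storage threshold should be verified against your version (primary pools use settings such as cluster.storage.capacity.notificationthreshold); the check itself is just a ratio, sketched here:

```python
# Sketch: capacity alerting as a used/total fraction. The 0.80 threshold
# mirrors the "over 80%" figure from the thread; actual defaults differ by
# setting and CloudStack version.
def low_storage(used_bytes, total_bytes, threshold=0.80):
    return used_bytes / total_bytes >= threshold

TB = 1024 ** 4
# ~2 TB free on a 12 TB store is still >80% used, so it alerts:
print(low_storage(10 * TB, 12 * TB))  # True
print(low_storage(6 * TB, 12 * TB))   # False
```

This is why adding capacity (or raising the threshold setting, where your version exposes one) clears the alert, while absolute free space alone does not.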
Re: Autostarting VMs on KVM?
Thank you, Makrand. On 8/27/2018 11:11 AM, Makrand wrote: Hi Asai, The Service offering with HA enabled will do the trick. While launching the VM just choose this SO. In case your previous SO was not HA enabled (and thus the VM) you can actually change the SO and relaunch the VM. Just test it before on one of the VMs. The VMs with HA enabled will come back on their own once the standalone host comes back online (Assuming VMs went down abruptly while the host went down and were not shut down manually) Note- By default, all virtual router VMs and Elastic Load Balancing VMs are automatically configured as HA-enabled. -- Makrand On Thu, Aug 16, 2018 at 2:41 AM, Asai wrote: Thanks, Eric, Do they have to be already created as HA instances? Can you turn on HA after the fact? Also, what if it’s only one standalone server with no failover? Asai On Aug 15, 2018, at 1:39 PM, Eric Lee Green wrote: If you set the offering to allow HA and create the instances as HA instances, they will autostart once the management server figures out they're really dead (either because it used STONITH to kill the unreachable node, or because that node became reachable again). When I had to reboot my cluster due to a massive network failure (critical 10 gigabit switch croaked, had to slide a new one in), all the instances marked "HA" came back up all by themselves without me having to do anything about it. On 8/15/18 09:11, Asai wrote: Thanks, Dag, Looks like scripting it is the way to go. Asai On Aug 15, 2018, at 9:06 AM, Dag Sonstebo wrote: Hi Asai, In short – no that is not a use case CloudStack is designed for, the VM states are controlled by CloudStack management. You should however look at using HA service offerings and host HA (if you meet all the pre-requisites). Between these mechanisms VMs can be brought up on other hosts if a host goes down. Alternatively if you are looking to trigger an automated startup of VMs I suggest you simply script this with e.g. cloudmonkey. 
Keep in mind this still requires a healthy management server though. Regards, Dag Sonstebo Cloud Architect ShapeBlue On 15/08/2018, 16:47, "Asai" wrote: Thanks, Dag, On boot of the server, I would like the VMs to start up automatically, rather than me having to go to the management console and start them manually. We suffered some downtime and in restarting the hardware, I had to manually get everything back up and running. Asai dag.sonst...@shapeblue.com www.shapeblue.com 53 Chandos Place, Covent Garden, London WC2N 4HSUK @shapeblue On Aug 15, 2018, at 1:22 AM, Dag Sonstebo wrote: Hi Asai, Can you explain a bit more what you are trying to achieve? Everything in CloudStack is controlled by the management server, not the KVM host, and in general the assumption is a KVM host is always online. Regards, Dag Sonstebo Cloud Architect ShapeBlue On 15/08/2018, 03:38, "Asai" wrote: Greetings, Can anyone offer advice on how to autostart VMs at boot time using KVM? There doesn’t seem to be any documentation for this in the CS docs. We’re on CS 4.9.2.0. I tried doing it with virsh autostart, but it just throws an error. Thank you, Asai dag.sonst...@shapeblue.com www.shapeblue.com 53 Chandos Place, Covent Garden, London WC2N 4HSUK @shapeblue
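Dag's suggestion to script the autostart with cloudmonkey can be sketched as below. This is a hedged illustration, not a verified procedure: the API calls named (listVirtualMachines with state=Stopped, startVirtualMachine) exist, but the wiring — running this from cron or a systemd unit after boot, and how the id list is obtained — is an assumption. The snippet only assembles the command strings, so the logic is testable offline:

```python
# Hedged sketch: build the cloudmonkey commands a boot-time script would run
# to start every stopped VM. In practice you would obtain the id list from
# 'cloudmonkey list virtualmachines state=Stopped filter=id' and execute each
# returned line (e.g. via subprocess) against a configured cloudmonkey.
def autostart_commands(stopped_vm_ids):
    # One startVirtualMachine API call per stopped VM id.
    return [f"cloudmonkey start virtualmachine id={vm_id}"
            for vm_id in stopped_vm_ids]

for cmd in autostart_commands(["6a32...", "b90c..."]):  # hypothetical UUIDs
    print(cmd)
```

As Dag notes just above, this still depends on a healthy management server being reachable when the script runs.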
Re: KVM Live Snapshots
OK, thanks, Dag. I think I finally get it. On 8/27/2018 2:47 AM, Dag Sonstebo wrote: Hi Asai, In the context of CloudStack your metadata is effectively in the CloudStack DB. If you want to capture the point-in-time settings for the VMs in question you would simply do a "virsh dumpxml " against the VM and capture this data somehow. Keep in mind though it's not going to be particularly useful to you under CloudStack. If you ever needed to restore a VM you would do so by importing the volumes you backed up, create a template from the root volume, create a new VM from this and attach any imported data disks - and during this process CloudStack would build new metadata for you since you are effectively building a new VM. Regards, Dag Sonstebo Cloud Architect ShapeBlue On 24/08/2018, 18:35, "Asai" wrote: Thanks, Dag. Then what do you do in case your VM metadata is lost? With XenServer you can export the VM as an XVA file. Then re-import into XenCenter as a whole VM. Is there nothing so simple in KVM / Cloudstack? How do you keep your VM metadata and your disk in a recoverable package? Asai On 8/24/2018 1:58 AM, Dag Sonstebo wrote: > Sorry I should also have pointed out - the method outlined below is effectively the same as the steps carried out during a volume snapshot process - where the convert writes the snapshot to secondary storage. > > Regards, > Dag Sonstebo > Cloud Architect > ShapeBlue > > On 24/08/2018, 09:51, "Dag Sonstebo" wrote: > > Hi Asai, > > To answer your previous question - VM snapshots are inline in the qcow2 image, i.e. contained in the disk itself, and you need to use qemu-img convert to write this to a separate file. 
The following should point you in the right direction:
>
> root@ref-trl-678-k-M7-dsonstebo-kvm2:~# virsh list
>  Id    Name        State
>  1     s-1-VM      running
>  2     v-3-VM      running
>  4     i-2-4-VM    running
>
> root@ref-trl-678-k-M7-dsonstebo-kvm2:~# virsh snapshot-list 4
>  Name                         Creation Time          State
>  i-2-4-VM_VS_20180824084100   2018-08-24 08:34:00 +  running
>
> root@ref-trl-678-k-M7-dsonstebo-kvm2:~# virsh snapshot-info 4 --snapshotname i-2-4-VM_VS_20180824084100
> Name:        i-2-4-VM_VS_20180824084100
> Domain:      i-2-4-VM
> Current:     yes
> State:       running
> Location:    internal
> Parent:      -
> Children:    0
> Descendants: 0
> Metadata:    yes
>
> In the db:
>
> SELECT * FROM cloud.vm_snapshots
>
> *************************** 1. row ***************************
>                  id: 1
>                uuid: 4ad297d6-ea70-418c-9df3-bf9ccde3eb8c
>                name: i-2-4-VM_VS_20180824084100
>        display_name: livesnap1
>         description:
>               vm_id: 4
>          account_id: 2
>           domain_id: 1
> service_offering_id: 1
>    vm_snapshot_type: DiskAndMemory
>               state: Ready
>              parent:
>             current: 1
>        update_count: 2
>             updated: 2018-08-24 08:42:30
>             created: 2018-08-24 08:41:00
>             removed:
>
> To write the above inline snapshot to disk you would do something like this:
>
> qemu-img convert -f qcow2 -O qcow2 -s i-2-4-VM_VS_20180824084100 /mnt/pathtoqcow2fileforVM /tmp/mycopiedsnapshot.qcow2
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 24/08/2018, 02:13, "Ivan Kudryavtsev" wrote:
>
> There are API calls which enable creation of image snapshots from VM
> snapshot. I suppose it's the thing Simon is talking about. It doesn't help
> with full VM image backup (incl RAM) but it helps doing synchronous same
> timestamp backup across all VM volumes. Actually it's th
Re: KVM Live Snapshots
This sounds like a great idea, except where can I find the VM snapshot in the file system? I’ve checked the database for some kind of indication, and I’ve checked primary and secondary storage to try to locate this snapshot file but I can’t find it… Any insights on this? Thanks! Asai > On Aug 23, 2018, at 2:25 PM, Simon Weller wrote: > > There are lots of ways you can implement a Business Continuity or DR plan. > > Some folks implement a second region or zone in a different market and build > their applications or services to be resilient across different data centers > (and/or markets). This often involved various forms of data replication (DB, > file et al). > > If you rely on secondary storage for backups, the assumption here is that it > uses a different storage system than your primary storage and it can be used > for recovery if your primary storage was to fail. > > > Now since the VM snapshot feature can be called by API and the resulting > QCOW2 file is written to primary storage, you could use a script to execute > the snapshot and then copy off the QCOW2 files somewhere else. > > You could also use something like the Veeam agent - > https://www.veeam.com/windows-linux-availability-agents.html and backup your > VMs to an offsite NFS mount. > > > - Si > > > > > > From: Asai > Sent: Thursday, August 23, 2018 4:06 PM > To: users@cloudstack.apache.org > Subject: Re: KVM Live Snapshots > > So, I think this is kind of an elephant in the room. > > How do we get a standalone VM backup? Or what is the best way to back up > Cloudstack? > > Right now we are making regular DB backups, and backing up secondary storage > (for volume snapshots). But in case of disaster, how do we recover this? > > Is there third party software available? > Asai > > >> On Aug 22, 2018, at 10:17 AM, Ivan Kudryavtsev >> wrote: >> >> There is no way to run scheduled snapshots for whole vm, at least with KVM. 
>> I suppose the function is for adhoc only, especially as you may know they >> are not copied to secondary storage. >> >> чт, 23 авг. 2018 г., 0:10 Asai : >> >>> Great, thanks for that. >>> >>> So, is there a way then to make these whole VM snapshots recurring like >>> recurring volume snapshots? >>> >>> What are best practices for recovering a volume snapshot? e.g. disaster >>> recovery scenario? >>> >>> Asai >>> >>> >>> >>> >
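The approach Simon outlines in the thread above — trigger a VM snapshot through the API, then copy the resulting QCOW2 off primary storage — can be sketched as a command sequence. createVMSnapshot is the real API call; the cloudmonkey invocation style, the paths, and the plain-copy step are hypothetical illustrations. The snippet only builds the command strings:

```python
# Hedged sketch of the snapshot-then-copy backup Simon describes. The qcow2
# path on primary storage and the destination directory are placeholders.
def backup_commands(vm_id, qcow2_path, dest_dir):
    return [
        # API-driven VM snapshot (disk-only here; snapshotmemory=true would
        # also capture RAM, at the cost of a larger snapshot).
        f"cloudmonkey create vmsnapshot virtualmachineid={vm_id} snapshotmemory=false",
        # Copy the snapshot-bearing qcow2 somewhere off-box (rsync/scp in practice).
        f"cp {qcow2_path} {dest_dir}/",
    ]

cmds = backup_commands("1b7f...", "/mnt/primary/1b7f....qcow2", "/backup")  # hypothetical ids/paths
for c in cmds:
    print(c)
```

Note, per Dag's replies elsewhere in this thread, that the KVM VM snapshot is internal to the qcow2, so copying the whole file captures it; extracting just the snapshot requires the qemu-img convert step he shows.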
Re: KVM Live Snapshots
Thanks, Simon. So Cloudmonkey could call the VM snapshot? On August 23, 2018 2:25:53 PM MST, Simon Weller wrote: >There are lots of ways you can implement a Business Continuity or DR >plan. > >Some folks implement a second region or zone in a different market and >build their applications or services to be resilient across different >data centers (and/or markets). This often involved various forms of >data replication (DB, file et al). > >If you rely on secondary storage for backups, the assumption here is >that it uses a different storage system than your primary storage and >it can be used for recovery if your primary storage was to fail. > > >Now since the VM snapshot feature can be called by API and the >resulting QCOW2 file is written to primary storage, you could use a >script to execute the snapshot and then copy off the QCOW2 files >somewhere else. > >You could also use something like the Veeam agent - >https://www.veeam.com/windows-linux-availability-agents.html and backup >your VMs to an offsite NFS mount. > > >- Si > > > > > >From: Asai >Sent: Thursday, August 23, 2018 4:06 PM >To: users@cloudstack.apache.org >Subject: Re: KVM Live Snapshots > >So, I think this is kind of an elephant in the room. > >How do we get a standalone VM backup? Or what is the best way to back >up Cloudstack? > >Right now we are making regular DB backups, and backing up secondary >storage (for volume snapshots). But in case of disaster, how do we >recover this? > >Is there third party software available? >Asai > > >> On Aug 22, 2018, at 10:17 AM, Ivan Kudryavtsev > wrote: >> >> There is no way to run scheduled snapshots for whole vm, at least >with KVM. >> I suppose the function is for adhoc only, especially as you may know >they >> are not copied to secondary storage. >> >> чт, 23 авг. 2018 г., 0:10 Asai : >> >>> Great, thanks for that. >>> >>> So, is there a way then to make these whole VM snapshots recurring >like >>> recurring volume snapshots? 
>>> >>> What are best practices for recovering a volume snapshot? e.g. >disaster >>> recovery scenario? >>> >>> Asai >>> >>> >>> >>> -- Asai
Re: KVM Live Snapshots
So, I think this is kind of an elephant in the room. How do we get a standalone VM backup? Or what is the best way to back up Cloudstack? Right now we are making regular DB backups, and backing up secondary storage (for volume snapshots). But in case of disaster, how do we recover this? Is there third party software available? Asai > On Aug 22, 2018, at 10:17 AM, Ivan Kudryavtsev > wrote: > > There is no way to run scheduled snapshots for whole vm, at least with KVM. > I suppose the function is for adhoc only, especially as you may know they > are not copied to secondary storage. > > чт, 23 авг. 2018 г., 0:10 Asai : > >> Great, thanks for that. >> >> So, is there a way then to make these whole VM snapshots recurring like >> recurring volume snapshots? >> >> What are best practices for recovering a volume snapshot? e.g. disaster >> recovery scenario? >> >> Asai >> >> >> >>
Re: KVM Live Snapshots
Great, thanks for that. So, is there a way then to make these whole VM snapshots recurring like recurring volume snapshots? What are best practices for recovering a volume snapshot? e.g. disaster recovery scenario? Asai
Re: KVM Live Snapshots
Thank you for your responses, What’s the difference, then, between a "VM" snapshot and a "VOLUME" snapshot? I liked how in XenServer, you could export a whole VM by first taking a snapshot. This was great for disaster recovery backup, is there a way to do something similar in Cloudstack? Asai > On Aug 22, 2018, at 8:51 AM, Simon Weller wrote: > > Make sure you have kvm.snapshot.enabled set to true in Global Settings. This > setting change will probably require a management server restart. > > > - Si > > > > > > From: Asai > Sent: Wednesday, August 22, 2018 10:44 AM > To: users@cloudstack.apache.org > Subject: KVM Live Snapshots > > Greetings, > > We successfully upgraded to 4.11.1. One of the main reasons we did this was > that we thought this would enable us to do live KVM snapshots of running VMs. > This doesn’t seem to be the case, though. When I try to snapshot a running > VM, I just get the message: "KVM VM does not allow to take a disk-only > snapshot when VM is in running state" > > Is there a way currently to do this with Cloudstack and KVM VMs? > > Asai > >
KVM Live Snapshots
Greetings, We successfully upgraded to 4.11.1. One of the main reasons we did this was that we thought this would enable us to do live KVM snapshots of running VMs. This doesn’t seem to be the case, though. When I try to snapshot a running VM, I just get the message: "KVM VM does not allow to take a disk-only snapshot when VM is in running state" Is there a way currently to do this with Cloudstack and KVM VMs? Asai
Re: 4.9 to 4.11 upgrade broken
Before upgrading the router, can I restart the network and check "Make redundant" so that VMs don’t become inaccessible during upgrade? Will this work without upgrading first? Asai > On Aug 21, 2018, at 11:18 PM, Sergey Levitskiy wrote: > > You can either Restart the network with cleanup or simply destroy VR and let > it be created on the next VM deployment. > > On 8/21/18, 11:13 PM, "Asai" wrote: > >Thanks, nearly back up and running. One question, what about the Virtual > Router upgrade? What do I do if the upgrade fails on the Virtual Router? > Looking for docs on this, but can’t find anything. > >Thanks for your assistance. > >Asai > > >> On Aug 21, 2018, at 6:35 PM, Sergey Levitskiy wrote: >> >> Yes. this should bring you back. >> However if you perform what you described in your previous reply + rename >> template in CS DB to systemvm-kvm-4.11.1 from what it is now >> systemvm-kvm-4.11 you should be able to bring all up as it is. Updating >> template image is not enough. >> >> On 8/21/18, 4:04 PM, "Asai" wrote: >> >> OK thanks a lot, Sergey, >> >> That helps. What’s the best method to roll back? Just use yum to roll >> back to 4.9 and rebuild the DB from backup? >> >> >>> On Aug 21, 2018, at 3:59 PM, Sergey Levitskiy wrote: >>> >>> The fastest and easiest way is to rollback both DB and management server >>> and start over. You need to have correct systemVM template registered >>> before you initiate an upgrade. >>> >>> Thanks, >>> Sergey >>> >>> >>> On 8/21/18, 2:30 PM, "Asai" wrote: >>> >>> Is there anybody out there that can assist with this? >>> >>> Asai >>> >>> >>>> On Aug 21, 2018, at 2:01 PM, Asai wrote: >>>> >>>> Is there any more specific instruction about this? >>>> >>>> What is the best practice? Should I roll back first? Is there any >>>> documentation about rolling back? Do I uninstall cloudstack management >>>> and re-install 4.9? >>>> >>>> Or is it as simple as just overwriting the file? 
If so, what about the >>>> template.properties file and the metadata in there like qcow2.size? >>>> >>>> filename=9cebb971-8605-3493-86f3-f5d1aef1715e.qcow2 >>>> id=225 >>>> qcow2.size=316310016 >>>> public=true >>>> uniquename=225-2-826a2950-bb8e-34dd-9420-1eb24ea16b4a >>>> qcow2.virtualsize=2516582400 >>>> virtualsize=2516582400 >>>> checksum=2d8d1e4eacc976814b97f02849481433 >>>> hvm=true >>>> description=systemvm-kvm-4.11 >>>> qcow2=true >>>> qcow2.filename=9cebb971-8605-3493-86f3-f5d1aef1715e.qcow2 >>>> size=316310016 >>>> >>>> >>>> Asai >>>> >>>> >>>>> On Aug 21, 2018, at 1:56 PM, ilya musayev >>>>> wrote: >>>>> >>>>> yes - please try the proper 4.11 systemvm templates. >>>>> >>>>>> On Aug 21, 2018, at 1:54 PM, Asai wrote: >>>>>> >>>>>> Can I manually download the systemvm template from here? >>>>>> http://download.cloudstack.org/systemvm/4.11/ >>>>>> <http://download.cloudstack.org/systemvm/4.11/> >>>>>> >>>>>> Then manually overwrite it in the filesystem and update it accordingly >>>>>> in the database? >>>>>> >>>>>> Asai >>>>>> >>>>>> >>>>>>> On Aug 21, 2018, at 1:40 PM, Asai wrote: >>>>>>> >>>>>>> 4.11.0 >>>>>>> >>>>>>> As outlined in this >>>>>>> http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.11.0.0/upgrade/upgrade-4.9.html >>>>>>> >>>>>>> <http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.11.0.0/upgrade/upgrade-4.9.html> >>>>>>>> On Aug 21, 2018, at 1:37 PM, ilya musayev >>>>>>>> wrote: >>>>>>>> >>>>>>>> which template did you use? >>>>>>>> >>>>>>>>> On Aug 21, 2018, at 1:36 PM, Asai wrote: >>>>>>>>> >>>&g
Re: 4.9 to 4.11 upgrade broken
Thanks, nearly back up and running. One question: what about the Virtual Router upgrade? What do I do if the upgrade fails on the Virtual Router? I've been looking for docs on this but can't find anything. Thanks for your assistance.

Asai

> On Aug 21, 2018, at 6:35 PM, Sergey Levitskiy wrote:
>
> Yes, this should bring you back.
> However, if you perform what you described in your previous reply and also
> rename the template in the CS DB to systemvm-kvm-4.11.1 from what it is now
> (systemvm-kvm-4.11), you should be able to bring everything up as it is.
> Updating the template image alone is not enough.
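For reference, Sergey's suggested DB rename might be sketched as below. This is an assumption-laden sketch, not a verified procedure: it assumes the standard `cloud` database and `vm_template` table, and that the 4.11.1 upgrader matches on the template name quoted in this thread. Inspect your own rows and take a DB backup before updating anything.

```sql
USE cloud;

-- See which system VM templates are currently registered for KVM
SELECT id, name, unique_name, type, hypervisor_type
  FROM vm_template
 WHERE type = 'SYSTEM' AND hypervisor_type = 'KVM' AND removed IS NULL;

-- Rename the 4.11 entry so the 4.11.1 upgrade step can find it
-- (names here are the ones quoted in this thread; verify against SELECT above)
UPDATE vm_template
   SET name = 'systemvm-kvm-4.11.1'
 WHERE name = 'systemvm-kvm-4.11' AND removed IS NULL;
```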
Re: 4.9 to 4.11 upgrade broken
OK, thanks a lot, Sergey.

That helps. What's the best method to roll back? Just use yum to roll back to 4.9 and rebuild the DB from backup?

> On Aug 21, 2018, at 3:59 PM, Sergey Levitskiy wrote:
>
> The fastest and easiest way is to roll back both the DB and the management
> server and start over. You need to have the correct systemVM template
> registered before you initiate an upgrade.
>
> Thanks,
> Sergey
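The yum-plus-DB-restore rollback being discussed might look something like the sketch below. Everything in it is an assumption about a typical CentOS/yum install (package names, DB name, and the dump filename are examples only), so the commands are wrapped in a dry-run `run` helper that just echoes them; nothing destructive executes until you replace the helper's body after verifying each step against your own environment and backups.

```shell
# Dry-run rollback sketch: swap the echo in run() for "$@" only after
# checking every step against your own setup.
run() { echo "+ $*"; }

run systemctl stop cloudstack-management
# Back to the 4.9 package line (exact package set depends on your install)
run yum downgrade cloudstack-management cloudstack-common
# Restore the DB dump taken before the upgrade attempt (filename is an example)
run mysql -u cloud -p cloud "< cloud-db-pre-upgrade-dump.sql"
run systemctl start cloudstack-management
```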
Re: 4.9 to 4.11 upgrade broken
I have downloaded systemvm 4.11.1, unzipped it, put it in the proper template folder, and renamed it to match the name of the previous (4.11) template. I then updated the template.properties file with the proper size in bytes and virtual size (using qemu-img info to get the virtual size), and updated these values in the database as well.

Basically, I don't want to pull the trigger on this unless I get a go-ahead from someone who knows what they're doing. I don't want to mess up our VMs.

Thanks for your assistance.

> On Aug 21, 2018, at 2:30 PM, Asai wrote:
>
> Is there anybody out there that can assist with this?
>
> Asai
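Since template.properties is the sticking point, here is a small sketch of how its numbers are derived. The 1 MiB stand-in file is purely so the commands run anywhere; on a real host you would point IMG at the downloaded systemvm qcow2, and the virtual size comes from qemu-img info rather than stat.

```shell
# Derive the values template.properties wants (size/qcow2.size, checksum).
# IMG is a stand-in file for illustration only.
IMG=/tmp/demo-systemvm.qcow2
head -c 1048576 /dev/zero > "$IMG"

SIZE=$(stat -c %s "$IMG")                 # -> size= and qcow2.size=
SUM=$(md5sum "$IMG" | awk '{print $1}')   # -> checksum=
echo "size=$SIZE"
echo "checksum=$SUM"

# On the real image, virtualsize / qcow2.virtualsize come from:
#   qemu-img info <image>.qcow2    # look for "virtual size: ... (N bytes)"
```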
Re: 4.9 to 4.11 upgrade broken
Is there anybody out there that can assist with this?

Asai

> On Aug 21, 2018, at 2:01 PM, Asai wrote:
>
> Is there any more specific instruction about this?
>
> What is the best practice? Should I roll back first? Is there any
> documentation about rolling back? Do I uninstall cloudstack management and
> re-install 4.9?
>
> Or is it as simple as just overwriting the file? If so, what about the
> template.properties file and the metadata in there, like qcow2.size?
Re: 4.9 to 4.11 upgrade broken
Is there any more specific instruction about this?

What is the best practice? Should I roll back first? Is there any documentation about rolling back? Do I uninstall CloudStack management and re-install 4.9?

Or is it as simple as just overwriting the file? If so, what about the template.properties file and the metadata in there, like qcow2.size?

filename=9cebb971-8605-3493-86f3-f5d1aef1715e.qcow2
id=225
qcow2.size=316310016
public=true
uniquename=225-2-826a2950-bb8e-34dd-9420-1eb24ea16b4a
qcow2.virtualsize=2516582400
virtualsize=2516582400
checksum=2d8d1e4eacc976814b97f02849481433
hvm=true
description=systemvm-kvm-4.11
qcow2=true
qcow2.filename=9cebb971-8605-3493-86f3-f5d1aef1715e.qcow2
size=316310016

Asai

> On Aug 21, 2018, at 1:56 PM, ilya musayev wrote:
>
> yes - please try the proper 4.11 systemvm templates.
Re: 4.9 to 4.11 upgrade broken
Can I manually download the systemvm template from here?
http://download.cloudstack.org/systemvm/4.11/

Then manually overwrite it in the filesystem and update it accordingly in the database?

Asai

> On Aug 21, 2018, at 1:40 PM, Asai wrote:
>
> 4.11.0
>
> As outlined in
> http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.11.0.0/upgrade/upgrade-4.9.html
Re: 4.9 to 4.11 upgrade broken
4.11.0

As outlined in
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.11.0.0/upgrade/upgrade-4.9.html

> On Aug 21, 2018, at 1:37 PM, ilya musayev wrote:
>
> which template did you use?
4.9 to 4.11 upgrade broken
Greetings,

I just tried to upgrade from 4.9 to 4.11, but it looks like the system VM template I downloaded according to the upgrade guide is the wrong template. It's 4.11, but I upgraded to 4.11.1 and I get this error message:

Caused by: com.cloud.utils.exception.CloudRuntimeException: 4.11.1.0KVM SystemVm template not found. Cannot upgrade system Vms
at com.cloud.upgrade.dao.Upgrade41100to41110.updateSystemVmTemplates(Upgrade41100to41110.java:281)
at com.cloud.upgrade.dao.Upgrade41100to41110.performDataMigration(Upgrade41100to41110.java:68)
at com.cloud.upgrade.DatabaseUpgradeChecker.upgrade(DatabaseUpgradeChecker.java:578)
... 53 more
2018-08-21 13:28:25,257 INFO [o.e.j.s.h.ContextHandler] (main:null) (logid:) Started o.e.j.s.h.MovedContextHandler@15bfd87{/,null,AVAILABLE}
2018-08-21 13:28:25,317 INFO [o.e.j.s.AbstractConnector] (main:null) (logid:) Started ServerConnector@4b1c1ea0{HTTP/1.1,[http/1.1]}{:::8080}
2018-08-21 13:28:25,318 INFO [o.e.j.s.Server] (main:null) (logid:) Started @20725ms

Can anyone assist with getting this corrected?

Thank you,
Asai
Re: Upgrading from 4.9 to 4.10
Another question on this subject: our secondary storage is throwing alerts for low storage, and it seems like I can't upload anything to it at this point. Can I change any settings to allow continued use of secondary storage even when storage is low, since we're only talking about a few hundred megabytes here?

Thanks,
Asai

> On Aug 15, 2018, at 9:17 AM, Asai wrote:
>
> OK, thanks for that advice.
>
> I found out the problem. It was lack of storage.
> Asai
Re: Autostarting VMs on KVM?
Thanks, Eric.

Do they have to be already created as HA instances? Can you turn on HA after the fact? Also, what if it's only one standalone server with no failover?

Asai

> On Aug 15, 2018, at 1:39 PM, Eric Lee Green wrote:
>
> If you set the offering to allow HA and create the instances as HA
> instances, they will autostart once the management server figures out
> they're really dead (either because it used STONITH to kill the unreachable
> node, or because that node became reachable again). When I had to reboot my
> cluster due to a massive network failure (a critical 10 gigabit switch
> croaked and I had to slide a new one in), all the instances marked "HA" came
> back up all by themselves without me having to do anything about it.
Re: Upgrading from 4.9 to 4.10
OK, thanks for that advice.

I found out the problem. It was lack of storage.
Asai

> On Aug 15, 2018, at 9:14 AM, ilya musayev wrote:
>
> +1 on 4.11 - it's the LTS release and got much more attention
>
> On Wed, Aug 15, 2018 at 9:13 AM Dag Sonstebo wrote:
>
>> Asai,
>>
>> First of all, I strongly advise you to upgrade to 4.11.1 instead of 4.10 –
>> this will cause you a lot less pain.
>>
>> With regards to the template upload in 4.9 – do template uploads normally
>> work? I'd suggest you check through the management-server.log and cloud.log
>> on the SSVM to troubleshoot further. Also maybe destroy the SSVM and let it
>> recreate, just in case it's not healthy.
>>
>> Regards,
>> Dag Sonstebo
>> Cloud Architect
>> ShapeBlue
>>
>> dag.sonst...@shapeblue.com
>> www.shapeblue.com
>> 53 Chandos Place, Covent Garden, London WC2N 4HS, UK
>> @shapeblue
Re: Autostarting VMs on KVM?
Thanks, Dag.

Looks like scripting it is the way to go.
Asai

> On Aug 15, 2018, at 9:06 AM, Dag Sonstebo wrote:
>
> Hi Asai,
>
> In short – no, that is not a use case CloudStack is designed for; the VM
> states are controlled by CloudStack management. You should, however, look at
> using HA service offerings and host HA (if you meet all the pre-requisites).
> Between these mechanisms VMs can be brought up on other hosts if a host
> goes down.
>
> Alternatively, if you are looking to trigger an automated startup of VMs, I
> suggest you simply script this with e.g. cloudmonkey. Keep in mind this
> still requires a healthy management server, though.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
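Dag's "script it with cloudmonkey" suggestion might look something like the sketch below. It assumes a configured classic (Python) cloudmonkey with API keys already set; the JSON scraping is deliberately crude, and the loop is skipped entirely if cloudmonkey is absent, so treat this as a starting point rather than a hardened boot script.

```shell
# Start every Stopped VM once the management server is healthy again.
STARTED=0
if ! command -v cloudmonkey >/dev/null 2>&1; then
    echo "cloudmonkey not installed; nothing to do"
else
    cloudmonkey set display json
    for id in $(cloudmonkey list virtualmachines state=Stopped filter=id |
                grep -o '"id": *"[^"]*"' | cut -d'"' -f4); do
        echo "starting $id"
        cloudmonkey start virtualmachine id="$id" && STARTED=$((STARTED + 1))
    done
fi
echo "started $STARTED VM(s)"
```

Run from cron @reboot (or a systemd unit) on a box that can reach the management server; as Dag notes, a healthy management server is still a prerequisite.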
Upgrading from 4.9 to 4.10
Greetings, We’re attempting an upgrade from 4.9 to 4.10, but we cannot seem to get past the SystemVM 4.10 download stage. When registering a new template according to the documentation, the newly created systemvm-4.10 never enters the ready state. I have tried downloading from the repository as well as uploading the systemvm from my local computer but it never seems to complete, and we cannot move forward. Can anyone share any insights into this problem? Asai
Re: Autostarting VMs on KVM?
Thanks, Dag.

On boot of the server, I would like the VMs to start up automatically, rather than me having to go to the management console and start them manually. We suffered some downtime, and in restarting the hardware I had to manually get everything back up and running.
Asai

> On Aug 15, 2018, at 1:22 AM, Dag Sonstebo wrote:
>
> Hi Asai,
>
> Can you explain a bit more what you are trying to achieve? Everything in
> CloudStack is controlled by the management server, not the KVM host, and in
> general the assumption is that a KVM host is always online.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
Autostarting VMs on KVM?
Greetings, Can anyone offer advice on how to autostart VMs at boot time using KVM? There doesn’t seem to be any documentation for this in the CS docs. We’re on CS 4.9.2.0. I tried doing it with virsh autostart, but it just throws an error. Thank you, Asai
Re: Unable to locate datastore with id
Yes, I had two servers running in the cluster and each had a primary storage. I removed one of them, but before removing it, I migrated all the VMs and volumes on that server to the other one.

Asai

> On Aug 25, 2017, at 10:57 AM, Rafael Weingärtner <rafaelweingart...@gmail.com> wrote:
>
> By migrated, you mean moved the VM's disk to another storage that was
> already connected in the cluster where the vm was running?
>
> On Fri, Aug 25, 2017 at 1:13 PM, Asai <a...@globalchangemusic.org> wrote:
>
>> I migrated the VM to another Primary Storage and Host, then removed the
>> primary storage that was no longer in use from the Infrastructure.
>> Asai
>> Network and Systems Administrator
>> GLOBAL CHANGE MEDIA
>> office: 520.398.2542
>> http://globalchange.media
>> Tucson, AZ
>>
>>> On Aug 25, 2017, at 7:10 AM, Rafael Weingärtner <rafaelweingart...@gmail.com> wrote:
>>>
>>> I did not understand. You removed a primary storage; how do you still
>>> have a VM running that is from this deleted primary storage?
>>> What did you do? What was the process to remove a storage?
Re: Unable to locate datastore with id
Thanks, The next problem seems to be now, that there was an original snapshot that is no longer there. When I try to snapshot the volume I’m working on, it fails, because it’s looking for an initial snapshot that the database says is still there, but which was actually removed when I removed the other Primary Storage volume. Does that make sense? What do I need to change in the database for that volume to be able to snapshot? Asai > On Aug 24, 2017, at 1:53 PM, Rafael Weingärtner <rafaelweingart...@gmail.com> > wrote: > > Do not remove (delete), to remove you can mark the flags. First set the > removed date flag and then the state as Destroyed. > > On Thu, Aug 24, 2017 at 1:10 PM, Asai <a...@globalchangemusic.org> wrote: > >> I the DB table snapshot_store_ref I see two snapshots listed with store_id >> 3. Can I safely remove those rows? >> Asai >> Network and Systems Administrator >> GLOBAL CHANGE MEDIA >> office: 520.398.2542 >> http://globalchange.media >> Tucson, AZ >> >>> On Aug 24, 2017, at 1:06 PM, Asai <a...@globalchangemusic.org> wrote: >>> >>> I can see now that id 3 refers to a primary storage that I had to remove >> a while ago. It’s still in the DB, though, and seems to be causing the >> error. What steps should I take to remove this reference completely from >> the DB? >>> Asai >>> >>> >>>> On Aug 24, 2017, at 11:20 AM, Gabriel Beims Bräscher < >> gabrasc...@gmail.com> wrote: >>>> >>>> Just adding to Rafael's comment. Constant database backup is also a >> great >>>> idea. >>>> >>>> 2017-08-24 15:19 GMT-03:00 Rafael Weingärtner < >> rafaelweingart...@gmail.com>: >>>> >>>>> I would suggest you taking quite a lot of care before executing >> anything in >>>>> the database. >>>>> Please, do not hesitate to ask for further assistance here. >>>>> >>>>> On Thu, Aug 24, 2017 at 2:15 PM, Asai <a...@globalchangemusic.org> >> wrote: >>>>> >>>>>> Thank you very much for the assistance. I will try that. 
>>>>>> Asai >>>>>>> On Aug 24, 2017, at 11:12 AM, Rafael Weingärtner < >>>>>> rafaelweingart...@gmail.com> wrote: >>>>>>> >>>>>>> Yes, quite easily. >>>>>>> I do not know if your problem is the same (you need a human not >> paying >>>>>> much >>>>>>> attention to cause this type of problem), but basically, you can >> check >>>>> in >>>>>>> the database what is the data store with id = 3, and then the volumes >>>>> of >>>>>>> snapshots that are allocated in this data store, and then you can >>>>> remove >>>>>>> them manually setting the flags. >>>>>>> >>>>>>> >>>>>>> On Thu, Aug 24, 2017 at 2:09 PM, Asai <a...@globalchangemusic.org> >>>>>> wrote: >>>>>>> >>>>>>>> Do you recall if it was able to be fixed? >>>>>>>> Asai >>>>>>>> >>>>>>>> >>>>>>>>> On Aug 24, 2017, at 11:03 AM, Rafael Weingärtner < >>>>>>>> rafaelweingart...@gmail.com> wrote: >>>>>>>>> >>>>>>>>> I have seen this issue before. In the environment I noticed it, it >>>>> was >>>>>>>>> caused by someone that manually deleted a volume in the database in >>>>>> order >>>>>>>>> to remove a data store, but the snapshot that was using that volume >>>>> was >>>>>>>> not >>>>>>>>> removed. Then, the data store was removed. By delete here I mean >>>>>> setting >>>>>>>>> the flag "removed" in the database to some data and the "state" to >>>>>>>>> destroyed. >>>>>>>>> >>>>>>>>> On Thu, Aug 24, 2017 at 1:45 PM, Asai <a...@globalchangemusic.org> >>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Greetings, >>>>>>>>>> >>>>>>>>>> I was browsing to the Snapshots section under “Storage” today and >>>>> came >>>>>>>> up >>>>>>>>>> with this error: >>>>>>>>>> >>>>>>>>>> Unable to locate datastore with id 3 >>>>>>>>>> >>>>>>>>>> I am unable to figure this out. I went to the Secondary Storage >>>>>> server >>>>>>>>>> and it looks like all the snapshots are there from 2 days ago. >> Can >>>>>>>> someone >>>>>>>>>> please assist me on how to troubleshoot this problem? 
>>>>>>>>>> Asai >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Rafael Weingärtner >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Rafael Weingärtner >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Rafael Weingärtner >>>>> >>> >> >> > > > -- > Rafael Weingärtner
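Rafael's advice in the thread above — do not DELETE the rows that reference the vanished store; instead set the `removed` timestamp and flip `state` to `Destroyed` — can be sketched as follows. This is only an illustration of the soft-delete pattern using an in-memory SQLite table that mimics `snapshot_store_ref` with the columns mentioned in the thread (`store_id`, `state`, `removed`); the real CloudStack schema is MySQL and has more columns, so treat this as a sketch, not a script to run against a production database (and take a database backup first, as Gabriel notes).

```python
# Soft-delete sketch: flag rows that reference the removed store (id 3)
# rather than deleting them, so other tables' references stay intact.
# The simplified schema below is an assumption for demonstration only.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE snapshot_store_ref (
        id INTEGER PRIMARY KEY,
        store_id INTEGER,
        state TEXT,
        removed TEXT
    )
""")
conn.executemany(
    "INSERT INTO snapshot_store_ref (id, store_id, state, removed) VALUES (?, ?, ?, NULL)",
    [(1, 3, "Ready"), (2, 3, "Ready"), (3, 1, "Ready")],
)

# Mark the rows pointing at the vanished store as removed/Destroyed.
now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
conn.execute(
    "UPDATE snapshot_store_ref SET removed = ?, state = 'Destroyed' WHERE store_id = 3",
    (now,),
)
conn.commit()

# The rows still exist, but are now flagged as removed.
rows = conn.execute(
    "SELECT id, state, removed IS NOT NULL FROM snapshot_store_ref ORDER BY id"
).fetchall()
print(rows)
```

The row that references the surviving store keeps `state = 'Ready'` and a NULL `removed` column, which is why management-server code that filters on those flags stops tripping over the stale references.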
Re: Unable to locate datastore with id
In the DB table snapshot_store_ref I see two snapshots listed with store_id 3. Can I safely remove those rows? Asai Network and Systems Administrator GLOBAL CHANGE MEDIA office: 520.398.2542 http://globalchange.media Tucson, AZ > On Aug 24, 2017, at 1:06 PM, Asai <a...@globalchangemusic.org> wrote: > > I can see now that id 3 refers to a primary storage that I had to remove a > while ago. It’s still in the DB, though, and seems to be causing the error. > What steps should I take to remove this reference completely from the DB? > Asai > > >> On Aug 24, 2017, at 11:20 AM, Gabriel Beims Bräscher <gabrasc...@gmail.com> >> wrote: >> >> Just adding to Rafael's comment. Constant database backup is also a great >> idea. >> >> 2017-08-24 15:19 GMT-03:00 Rafael Weingärtner <rafaelweingart...@gmail.com>: >> >>> I would suggest you taking quite a lot of care before executing anything in >>> the database. >>> Please, do not hesitate to ask for further assistance here. >>> >>> On Thu, Aug 24, 2017 at 2:15 PM, Asai <a...@globalchangemusic.org> wrote: >>> >>>> Thank you very much for the assistance. I will try that. >>>> Asai >>>>> On Aug 24, 2017, at 11:12 AM, Rafael Weingärtner < >>>> rafaelweingart...@gmail.com> wrote: >>>>> >>>>> Yes, quite easily. >>>>> I do not know if your problem is the same (you need a human not paying >>>> much >>>>> attention to cause this type of problem), but basically, you can check >>> in >>>>> the database what is the data store with id = 3, and then the volumes >>> of >>>>> snapshots that are allocated in this data store, and then you can >>> remove >>>>> them manually setting the flags. >>>>> >>>>> >>>>> On Thu, Aug 24, 2017 at 2:09 PM, Asai <a...@globalchangemusic.org> >>>> wrote: >>>>> >>>>>> Do you recall if it was able to be fixed? >>>>>> Asai >>>>>> >>>>>> >>>>>>> On Aug 24, 2017, at 11:03 AM, Rafael Weingärtner < >>>>>> rafaelweingart...@gmail.com> wrote: >>>>>>> >>>>>>> I have seen this issue before.
In the environment I noticed it, it >>> was >>>>>>> caused by someone that manually deleted a volume in the database in >>>> order >>>>>>> to remove a data store, but the snapshot that was using that volume >>> was >>>>>> not >>>>>>> removed. Then, the data store was removed. By delete here I mean >>>> setting >>>>>>> the flag "removed" in the database to some data and the "state" to >>>>>>> destroyed. >>>>>>> >>>>>>> On Thu, Aug 24, 2017 at 1:45 PM, Asai <a...@globalchangemusic.org> >>>>>> wrote: >>>>>>> >>>>>>>> Greetings, >>>>>>>> >>>>>>>> I was browsing to the Snapshots section under “Storage” today and >>> came >>>>>> up >>>>>>>> with this error: >>>>>>>> >>>>>>>> Unable to locate datastore with id 3 >>>>>>>> >>>>>>>> I am unable to figure this out. I went to the Secondary Storage >>>> server >>>>>>>> and it looks like all the snapshots are there from 2 days ago. Can >>>>>> someone >>>>>>>> please assist me on how to troubleshoot this problem? >>>>>>>> Asai >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Rafael Weingärtner >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Rafael Weingärtner >>>> >>>> >>> >>> >>> -- >>> Rafael Weingärtner >>> >
Re: Unable to locate datastore with id
I can see now that id 3 refers to a primary storage that I had to remove a while ago. It’s still in the DB, though, and seems to be causing the error. What steps should I take to remove this reference completely from the DB? Asai > On Aug 24, 2017, at 11:20 AM, Gabriel Beims Bräscher <gabrasc...@gmail.com> > wrote: > > Just adding to Rafael's comment. Constant database backup is also a great > idea. > > 2017-08-24 15:19 GMT-03:00 Rafael Weingärtner <rafaelweingart...@gmail.com>: > >> I would suggest you taking quite a lot of care before executing anything in >> the database. >> Please, do not hesitate to ask for further assistance here. >> >> On Thu, Aug 24, 2017 at 2:15 PM, Asai <a...@globalchangemusic.org> wrote: >> >>> Thank you very much for the assistance. I will try that. >>> Asai >>>> On Aug 24, 2017, at 11:12 AM, Rafael Weingärtner < >>> rafaelweingart...@gmail.com> wrote: >>>> >>>> Yes, quite easily. >>>> I do not know if your problem is the same (you need a human not paying >>> much >>>> attention to cause this type of problem), but basically, you can check >> in >>>> the database what is the data store with id = 3, and then the volumes >> of >>>> snapshots that are allocated in this data store, and then you can >> remove >>>> them manually setting the flags. >>>> >>>> >>>> On Thu, Aug 24, 2017 at 2:09 PM, Asai <a...@globalchangemusic.org> >>> wrote: >>>> >>>>> Do you recall if it was able to be fixed? >>>>> Asai >>>>> >>>>> >>>>>> On Aug 24, 2017, at 11:03 AM, Rafael Weingärtner < >>>>> rafaelweingart...@gmail.com> wrote: >>>>>> >>>>>> I have seen this issue before. In the environment I noticed it, it >> was >>>>>> caused by someone that manually deleted a volume in the database in >>> order >>>>>> to remove a data store, but the snapshot that was using that volume >> was >>>>> not >>>>>> removed. Then, the data store was removed. By delete here I mean >>> setting >>>>>> the flag "removed" in the database to some data and the "state" to >>>>>> destroyed. 
>>>>>> >>>>>> On Thu, Aug 24, 2017 at 1:45 PM, Asai <a...@globalchangemusic.org> >>>>> wrote: >>>>>> >>>>>>> Greetings, >>>>>>> >>>>>>> I was browsing to the Snapshots section under “Storage” today and >> came >>>>> up >>>>>>> with this error: >>>>>>> >>>>>>> Unable to locate datastore with id 3 >>>>>>> >>>>>>> I am unable to figure this out. I went to the Secondary Storage >>> server >>>>>>> and it looks like all the snapshots are there from 2 days ago. Can >>>>> someone >>>>>>> please assist me on how to troubleshoot this problem? >>>>>>> Asai >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Rafael Weingärtner >>>>> >>>>> >>>> >>>> >>>> -- >>>> Rafael Weingärtner >>> >>> >> >> >> -- >> Rafael Weingärtner >>
Re: Unable to locate datastore with id
Thank you very much for the assistance. I will try that. Asai > On Aug 24, 2017, at 11:12 AM, Rafael Weingärtner > <rafaelweingart...@gmail.com> wrote: > > Yes, quite easily. > I do not know if your problem is the same (you need a human not paying much > attention to cause this type of problem), but basically, you can check in > the database what is the data store with id = 3, and then the volumes of > snapshots that are allocated in this data store, and then you can remove > them manually setting the flags. > > > On Thu, Aug 24, 2017 at 2:09 PM, Asai <a...@globalchangemusic.org> wrote: > >> Do you recall if it was able to be fixed? >> Asai >> >> >>> On Aug 24, 2017, at 11:03 AM, Rafael Weingärtner < >> rafaelweingart...@gmail.com> wrote: >>> >>> I have seen this issue before. In the environment I noticed it, it was >>> caused by someone that manually deleted a volume in the database in order >>> to remove a data store, but the snapshot that was using that volume was >> not >>> removed. Then, the data store was removed. By delete here I mean setting >>> the flag "removed" in the database to some data and the "state" to >>> destroyed. >>> >>> On Thu, Aug 24, 2017 at 1:45 PM, Asai <a...@globalchangemusic.org> >> wrote: >>> >>>> Greetings, >>>> >>>> I was browsing to the Snapshots section under “Storage” today and came >> up >>>> with this error: >>>> >>>> Unable to locate datastore with id 3 >>>> >>>> I am unable to figure this out. I went to the Secondary Storage server >>>> and it looks like all the snapshots are there from 2 days ago. Can >> someone >>>> please assist me on how to troubleshoot this problem? >>>> Asai >>>> >>>> >>>> >>> >>> >>> -- >>> Rafael Weingärtner >> >> > > > -- > Rafael Weingärtner
Re: KVM VM Snapshots
Thank you, Simon, for the advice. Asai > On Aug 4, 2017, at 11:31 AM, Simon Weller <swel...@ena.com.INVALID> wrote: > > I'd wait until the official release is posted. The work in progress upgrade > docs are here: > http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.10/upgrade/upgrade-4.9.html > > Upgrading the agent won't take your VMs down. 4.10 does require an upgrade > for all virtual routers though, so this upgrade will be more involved than > the others recently (the last virtual router upgrade was 4.6). > > > ____ > From: Asai <a...@globalchangemusic.org> > Sent: Friday, August 4, 2017 1:27 PM > To: users@cloudstack.apache.org > Subject: Re: KVM VM Snapshots > > Also, in upgrading Cloudstack from 4.9 to 4.10, does upgrading the > cloudstack-agent cause any of the running VMs to reboot? > Asai > > >> On Aug 4, 2017, at 10:32 AM, Asai <a...@globalchangemusic.org> wrote: >> >> Are there any upgrade guides from 4.9 to 4.10? I’m looking for them in the >> docs but there doesn’t seem to be anything there yet... >> Asai >> >> >>> On Jul 11, 2017, at 8:21 AM, Rubens Malheiro <rubens.malhe...@gmail.com> >>> wrote: >>> >>> Thank you SI >>> This will be incredible! and revolutionary. >>> >>> Cloudstack WINS! >>> >>> On Tue, Jul 11, 2017 at 11:21 AM, Simon Weller <swel...@ena.com.invalid> >>> wrote: >>> >>>> The new VM Snapshot functionality for KVM supports disk and memory snaps. >>>> This means you can recover a VM to a point in time, so assuming that's what >>>> you are asking, yes it's hot. >>>> >>>> It does rely on QCOW2 disk formats right now. >>>> >>>> >>>> - Si >>>> >>>> >>>> ____ >>>> From: Rubens Malheiro <rubens.malhe...@gmail.com> >>>> Sent: Monday, July 10, 2017 7:44 PM >>>> To: users@cloudstack.apache.org >>>> Subject: Re: KVM VM Snapshots >>>> >>>> Sorry to mess up >>>> But a version 4.10 supported snapshot in KVM will it be hot?
>>>> On Mon, 10 Jul 2017 at 21:28 Simon Weller <swel...@ena.com.invalid> wrote: >>>> >>>>> Asai, >>>>> >>>>> 4.10 was approved last week. It should hit the repos with the next few >>>>> days. >>>>> >>>>> - Si >>>>> >>>>> Simon Weller/615-312-6068 >>>>> >>>>> -Original Message- >>>>> From: Asai [a...@globalchangemusic.org] >>>>> Received: Monday, 10 Jul 2017, 4:49PM >>>>> To: users@cloudstack.apache.org [users@cloudstack.apache.org] >>>>> Subject: Re: KVM VM Snapshots >>>>> >>>>> Rather than 9.10 I meant 4.10. Rather than 9.2 I meant 4.9.2. Sorry. >>>>> >>>>> >>>>> On 7/10/2017 2:46 PM, Asai wrote: >>>>>> Greetings, >>>>>> >>>>>> Back in January there was a push to integrate the KVM snapshotting >>>>>> ability into the 9.10 trunk. I think this did get merged in, but 9.10 >>>>>> doesn't seem to be anywhere near release yet, so wondering if the devs >>>>>> can push the KVM snapshotting patch into the 9.2 trunk and release as >>>>>> a minor update? >>>>>> >>>>>> Asai >>>>>> >>>>> >>>>> >>>> >> >
Re: KVM VM Snapshots
Also, in upgrading Cloudstack from 4.9 to 4.10, does upgrading the cloudstack-agent cause any of the running VMs to reboot? Asai > On Aug 4, 2017, at 10:32 AM, Asai <a...@globalchangemusic.org> wrote: > > Are there any upgrade guides from 4.9 to 4.10? I’m looking for them in the > docs but there doesn’t seem to be anything there yet... > Asai > > >> On Jul 11, 2017, at 8:21 AM, Rubens Malheiro <rubens.malhe...@gmail.com> >> wrote: >> >> Thank you SI >> This will be incredible! and revolutionary. >> >> Cloudstack WINS! >> >> On Tue, Jul 11, 2017 at 11:21 AM, Simon Weller <swel...@ena.com.invalid> >> wrote: >> >>> The new VM Snapshot functionality for KVM supports disk and memory snaps. >>> This means you can recover a VM to a port in time, so assuming that's what >>> you are asking, yes it's hot. >>> >>> It does rely on QCOW2 disk formats right now. >>> >>> >>> - Si >>> >>> >>> >>> From: Rubens Malheiro <rubens.malhe...@gmail.com> >>> Sent: Monday, July 10, 2017 7:44 PM >>> To: users@cloudstack.apache.org >>> Subject: Re: KVM VM Snapshots >>> >>> Sorry to mess up >>> But a version 4.10 supported snapshot in KVM will it be hot? >>> On Mon, 10 Jul 2017 at 21:28 Simon Weller <swel...@ena.com.invalid> wrote: >>> >>>> Asai, >>>> >>>> 4.10 was approved last week. It should hit the repos with the next few >>>> days. >>>> >>>> - Si >>>> >>>> Simon Weller/615-312-6068 >>>> >>>> -Original Message- >>>> From: Asai [a...@globalchangemusic.org] >>>> Received: Monday, 10 Jul 2017, 4:49PM >>>> To: users@cloudstack.apache.org [users@cloudstack.apache.org] >>>> Subject: Re: KVM VM Snapshots >>>> >>>> Rather than 9.10 I meant 4.10. Rather than 9.2 I meant 4.9.2. Sorry. >>>> >>>> >>>> On 7/10/2017 2:46 PM, Asai wrote: >>>>> Greetings, >>>>> >>>>> Back in January there was a push to integrate the KVM snapshotting >>>>> ability into the 9.10 trunk. 
I think this did get merged in, but 9.10 >>>>> doesn't seem to be anywhere near release yet, so wondering if the devs >>>>> can push the KVM snapshotting patch into the 9.2 trunk and release as >>>>> a minor update? >>>>> >>>>> Asai >>>>> >>>> >>>> >>> >
Re: KVM VM Snapshots
Are there any upgrade guides from 4.9 to 4.10? I’m looking for them in the docs but there doesn’t seem to be anything there yet... Asai > On Jul 11, 2017, at 8:21 AM, Rubens Malheiro <rubens.malhe...@gmail.com> > wrote: > > Thank you SI > This will be incredible! and revolutionary. > > Cloudstack WINS! > > On Tue, Jul 11, 2017 at 11:21 AM, Simon Weller <swel...@ena.com.invalid> > wrote: > >> The new VM Snapshot functionality for KVM supports disk and memory snaps. >> This means you can recover a VM to a port in time, so assuming that's what >> you are asking, yes it's hot. >> >> It does rely on QCOW2 disk formats right now. >> >> >> - Si >> >> >> >> From: Rubens Malheiro <rubens.malhe...@gmail.com> >> Sent: Monday, July 10, 2017 7:44 PM >> To: users@cloudstack.apache.org >> Subject: Re: KVM VM Snapshots >> >> Sorry to mess up >> But a version 4.10 supported snapshot in KVM will it be hot? >> On Mon, 10 Jul 2017 at 21:28 Simon Weller <swel...@ena.com.invalid> wrote: >> >>> Asai, >>> >>> 4.10 was approved last week. It should hit the repos with the next few >>> days. >>> >>> - Si >>> >>> Simon Weller/615-312-6068 >>> >>> -Original Message- >>> From: Asai [a...@globalchangemusic.org] >>> Received: Monday, 10 Jul 2017, 4:49PM >>> To: users@cloudstack.apache.org [users@cloudstack.apache.org] >>> Subject: Re: KVM VM Snapshots >>> >>> Rather than 9.10 I meant 4.10. Rather than 9.2 I meant 4.9.2. Sorry. >>> >>> >>> On 7/10/2017 2:46 PM, Asai wrote: >>>> Greetings, >>>> >>>> Back in January there was a push to integrate the KVM snapshotting >>>> ability into the 9.10 trunk. I think this did get merged in, but 9.10 >>>> doesn't seem to be anywhere near release yet, so wondering if the devs >>>> can push the KVM snapshotting patch into the 9.2 trunk and release as >>>> a minor update? >>>> >>>> Asai >>>> >>> >>> >>
Re: KVM VM Snapshots
Rather than 9.10 I meant 4.10. Rather than 9.2 I meant 4.9.2. Sorry. On 7/10/2017 2:46 PM, Asai wrote: Greetings, Back in January there was a push to integrate the KVM snapshotting ability into the 9.10 trunk. I think this did get merged in, but 9.10 doesn't seem to be anywhere near release yet, so wondering if the devs can push the KVM snapshotting patch into the 9.2 trunk and release as a minor update? Asai
KVM VM Snapshots
Greetings, Back in January there was a push to integrate the KVM snapshotting ability into the 9.10 trunk. I think this did get merged in, but 9.10 doesn't seem to be anywhere near release yet, so wondering if the devs can push the KVM snapshotting patch into the 9.2 trunk and release as a minor update? Asai
Re: 1 Pod Host Offline, All VMs Shut Down
OK, thanks, Simon. That makes sense, after brushing up on STP again, I configured a root switch where it should be, and hopefully that would have solved the problem. So, in Cloudstack, VMs will just shut down if storage is cut off? On 2017-03-18 4:00 PM, Simon Weller wrote: What I'm saying is that if your root bridge changed, it would cause all ports (potentially on both switches) to go into blocking and then learning mode. If that happened, it would have killed all of your storage. From: Asai <a...@globalchangemusic.org> Sent: Saturday, March 18, 2017 10:53 AM To: users@cloudstack.apache.org Subject: RE: 1 Pod Host Offline, All VMs Shut Down Thanks, Simon. But I still don't understand why the VMs that are on host 1 and connected to PS 1 would be affected by a disconnect of another host and storage that they're not dependent on. Will setting up the root switch for STP handle that issue? On March 17, 2017 6:10:05 PM MST, Simon Weller <swel...@ena.com> wrote: If no network storage path was available, more than likely fencing would occur. Simon Weller/615-312-6068 -Original Message- From: Asai [a...@globalchangemusic.org] Received: Friday, 17 Mar 2017, 7:40PM To: users@cloudstack.apache.org [users@cloudstack.apache.org] Subject: RE: 1 Pod Host Offline, All VMs Shut Down Ok, but why would all the VMs shut down? Is that an expected behavior? On March 17, 2017 5:15:46 PM MST, Simon Weller <swel...@ena.com> wrote: So check your switch logs. It's possible all your switch ports went into blocking mode when you lost the interswitch fiber patch. Simon Weller/615-312-6068 -Original Message- From: Asai [a...@globalchangemusic.org] Received: Friday, 17 Mar 2017, 6:56PM To: users@cloudstack.apache.org [users@cloudstack.apache.org] Subject: RE: 1 Pod Host Offline, All VMs Shut Down Haven't configured that... [facepalm] On March 17, 2017 4:46:24 PM MST, Simon Weller <swel...@ena.com> wrote: Which switch is root bridge for spanning tree? 
Simon Weller/615-312-6068 -Original Message- From: Asai [a...@globalchangemusic.org] Received: Friday, 17 Mar 2017, 6:29PM To: users@cloudstack.apache.org [users@cloudstack.apache.org] Subject: Re: 1 Pod Host Offline, All VMs Shut Down Yes, the cable was a fiber cable connecting two switches, one which connects host 1 and PS 1 (NFS share running on host 1), and the other switch connects host 2 and PS 2 (NFS share running on host 2). On 2017-03-17 4:04 PM, Simon Weller wrote: So was this cable connecting a couple of switches together? I'm confused when you say a single cable was connecting 4 devices, unless there is a switch or multiple switches involved. Is the primary storage NFS? Simon Weller/615-312-6068 -Original Message- From: Asai [a...@globalchangemusic.org] Received: Friday, 17 Mar 2017, 6:00PM To: users@cloudstack.apache.org [users@cloudstack.apache.org] Subject: Re: 1 Pod Host Offline, All VMs Shut Down KVM, and the damaged cable connected host 2 and PS 2 back to the host 1 and PS 1. Does that give you enough info to work with? On 2017-03-17 3:55 PM, Simon Weller wrote: What hypervisor are you using and what did the damaged cable connect? Simon Weller/615-312-6068 -Original Message- From: Asai [a...@globalchangemusic.org] Received: Friday, 17 Mar 2017, 5:45PM To: users@cloudstack.apache.org [users@cloudstack.apache.org] Subject: 1 Pod Host Offline, All VMs Shut Down Greetings, I have 2 hosts in a pod, and 3 primary storage shares. I just ran into an issue where a cable accidentally got damaged which took host 2 offline and primary storage 2 offline, but ALL of my VMs shut down spontaneously. Weird, because only half of my VMs are running on host 2 and primary storage 2. Even the VMs on host 1 and primary storage 1 shut down. Can anyone shed light on why this would have happened and how I can avoid this in the future? Much thanks, Asai -- Asai -- Asai -- Asai
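Simon's point in this thread is that without a deliberately chosen STP root bridge, losing the interswitch fiber link can force a root re-election that puts every port (on both switches) through blocking and learning, cutting off NFS long enough for all hosts to lose storage and the VMs to be shut down or fenced. Pinning the root, as Asai did, is typically a one-line priority change. A vendor-specific sketch (Cisco IOS-style syntax; other vendors differ, and VLAN 10 is a placeholder for the storage VLAN):

```
! Pin the core switch as STP root so a failed uplink cannot trigger a
! root re-election that blocks the storage ports.
spanning-tree mode rapid-pvst
spanning-tree vlan 10 priority 4096   ! lowest priority wins the root election

! On host/NFS-facing edge ports, skip the listening/learning delay:
interface GigabitEthernet0/1
 spanning-tree portfast
```

Rapid PVST+ also reconverges in seconds rather than the 30–50 seconds of classic STP, which by itself can make the difference between a transient NFS stall and hosts fencing their VMs.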
RE: 1 Pod Host Offline, All VMs Shut Down
Thanks, Simon. But I still don't understand why the VMs that are on host 1 and connected to PS 1 would be affected by a disconnect of another host and storage that they're not dependent on. Will setting up the root switch for STP handle that issue? On March 17, 2017 6:10:05 PM MST, Simon Weller <swel...@ena.com> wrote: >If no network storage path was available, more than likely fencing >would occur. > > >Simon Weller/615-312-6068 > >-Original Message- >From: Asai [a...@globalchangemusic.org] >Received: Friday, 17 Mar 2017, 7:40PM >To: users@cloudstack.apache.org [users@cloudstack.apache.org] >Subject: RE: 1 Pod Host Offline, All VMs Shut Down > >Ok, but why would all the VMs shut down? Is that an expected behavior? > >On March 17, 2017 5:15:46 PM MST, Simon Weller <swel...@ena.com> wrote: >>So check your switch logs. It's possible all your switch ports went >>into blocking mode when you lost the interswitch fiber patch. >> >>Simon Weller/615-312-6068 >> >>-Original Message- >>From: Asai [a...@globalchangemusic.org] >>Received: Friday, 17 Mar 2017, 6:56PM >>To: users@cloudstack.apache.org [users@cloudstack.apache.org] >>Subject: RE: 1 Pod Host Offline, All VMs Shut Down >> >>Haven't configured that... [facepalm] >> >>On March 17, 2017 4:46:24 PM MST, Simon Weller <swel...@ena.com> >wrote: >>>Which switch is root bridge for spanning tree? >>> >>>Simon Weller/615-312-6068 >>> >>>-Original Message- >>>From: Asai [a...@globalchangemusic.org] >>>Received: Friday, 17 Mar 2017, 6:29PM >>>To: users@cloudstack.apache.org [users@cloudstack.apache.org] >>>Subject: Re: 1 Pod Host Offline, All VMs Shut Down >>> >>>Yes, the cable was a fiber cable connecting two switches, one which >>>connects host 1 and PS 1 (NFS share running on host 1), and the other >>>switch connects host 2 and PS 2 (NFS share running on host 2). >>> >>> >>>On 2017-03-17 4:04 PM, Simon Weller wrote: >>>> So was this cable connecting a couple of switches together? 
I'm >>>confused when you say a single cable was connecting 4 devices, unless >>>there is a switch or multiple switches involved. Is the primary >>storage >>>NFS? >>>> >>>> Simon Weller/615-312-6068 >>>> >>>> -Original Message- >>>> From: Asai [a...@globalchangemusic.org] >>>> Received: Friday, 17 Mar 2017, 6:00PM >>>> To: users@cloudstack.apache.org [users@cloudstack.apache.org] >>>> Subject: Re: 1 Pod Host Offline, All VMs Shut Down >>>> >>>> KVM, and the damaged cable connected host 2 and PS 2 back to the >>host >>>1 >>>> and PS 1. Does that give you enough info to work with? >>>> >>>> >>>> On 2017-03-17 3:55 PM, Simon Weller wrote: >>>>> What hypervisor are you using and what did the damaged cable >>>connect? >>>>> >>>>> Simon Weller/615-312-6068 >>>>> >>>>> -Original Message- >>>>> From: Asai [a...@globalchangemusic.org] >>>>> Received: Friday, 17 Mar 2017, 5:45PM >>>>> To: users@cloudstack.apache.org [users@cloudstack.apache.org] >>>>> Subject: 1 Pod Host Offline, All VMs Shut Down >>>>> >>>>> Greetings, >>>>> >>>>> I have 2 hosts in a pod, and 3 primary storage shares. I just ran >>>into >>>>> an issue where a cable accidentally got damaged which took host 2 >>>>> offline and primary storage 2 offline, but ALL of my VMs shut down >>>>> spontaneously. Weird, because only half of my VMs are running on >>>host 2 >>>>> and primary storage 2. Even the VMs on host 1 and primary storage >>1 >>>>> shut down. Can anyone shed light on why this would have happened >>>and >>>>> how I can avoid this in the future? >>>>> >>>>> Much thanks, >>>>> Asai >>>>> >>>>> >>>> >> >>-- >>Asai > >-- >Asai -- Asai
RE: 1 Pod Host Offline, All VMs Shut Down
Ok, but why would all the VMs shut down? Is that an expected behavior? On March 17, 2017 5:15:46 PM MST, Simon Weller <swel...@ena.com> wrote: >So check your switch logs. It's possible all your switch ports went >into blocking mode when you lost the interswitch fiber patch. > >Simon Weller/615-312-6068 > >-Original Message- >From: Asai [a...@globalchangemusic.org] >Received: Friday, 17 Mar 2017, 6:56PM >To: users@cloudstack.apache.org [users@cloudstack.apache.org] >Subject: RE: 1 Pod Host Offline, All VMs Shut Down > >Haven't configured that... [facepalm] > >On March 17, 2017 4:46:24 PM MST, Simon Weller <swel...@ena.com> wrote: >>Which switch is root bridge for spanning tree? >> >>Simon Weller/615-312-6068 >> >>-Original Message- >>From: Asai [a...@globalchangemusic.org] >>Received: Friday, 17 Mar 2017, 6:29PM >>To: users@cloudstack.apache.org [users@cloudstack.apache.org] >>Subject: Re: 1 Pod Host Offline, All VMs Shut Down >> >>Yes, the cable was a fiber cable connecting two switches, one which >>connects host 1 and PS 1 (NFS share running on host 1), and the other >>switch connects host 2 and PS 2 (NFS share running on host 2). >> >> >>On 2017-03-17 4:04 PM, Simon Weller wrote: >>> So was this cable connecting a couple of switches together? I'm >>confused when you say a single cable was connecting 4 devices, unless >>there is a switch or multiple switches involved. Is the primary >storage >>NFS? >>> >>> Simon Weller/615-312-6068 >>> >>> -Original Message- >>> From: Asai [a...@globalchangemusic.org] >>> Received: Friday, 17 Mar 2017, 6:00PM >>> To: users@cloudstack.apache.org [users@cloudstack.apache.org] >>> Subject: Re: 1 Pod Host Offline, All VMs Shut Down >>> >>> KVM, and the damaged cable connected host 2 and PS 2 back to the >host >>1 >>> and PS 1. Does that give you enough info to work with? >>> >>> >>> On 2017-03-17 3:55 PM, Simon Weller wrote: >>>> What hypervisor are you using and what did the damaged cable >>connect? 
>>>> >>>> Simon Weller/615-312-6068 >>>> >>>> -Original Message- >>>> From: Asai [a...@globalchangemusic.org] >>>> Received: Friday, 17 Mar 2017, 5:45PM >>>> To: users@cloudstack.apache.org [users@cloudstack.apache.org] >>>> Subject: 1 Pod Host Offline, All VMs Shut Down >>>> >>>> Greetings, >>>> >>>> I have 2 hosts in a pod, and 3 primary storage shares. I just ran >>into >>>> an issue where a cable accidentally got damaged which took host 2 >>>> offline and primary storage 2 offline, but ALL of my VMs shut down >>>> spontaneously. Weird, because only half of my VMs are running on >>host 2 >>>> and primary storage 2. Even the VMs on host 1 and primary storage >1 >>>> shut down. Can anyone shed light on why this would have happened >>and >>>> how I can avoid this in the future? >>>> >>>> Much thanks, >>>> Asai >>>> >>>> >>> > >-- >Asai -- Asai
RE: 1 Pod Host Offline, All VMs Shut Down
Haven't configured that... [facepalm] On March 17, 2017 4:46:24 PM MST, Simon Weller <swel...@ena.com> wrote: >Which switch is root bridge for spanning tree? > >Simon Weller/615-312-6068 > >-Original Message- >From: Asai [a...@globalchangemusic.org] >Received: Friday, 17 Mar 2017, 6:29PM >To: users@cloudstack.apache.org [users@cloudstack.apache.org] >Subject: Re: 1 Pod Host Offline, All VMs Shut Down > >Yes, the cable was a fiber cable connecting two switches, one which >connects host 1 and PS 1 (NFS share running on host 1), and the other >switch connects host 2 and PS 2 (NFS share running on host 2). > > >On 2017-03-17 4:04 PM, Simon Weller wrote: >> So was this cable connecting a couple of switches together? I'm >confused when you say a single cable was connecting 4 devices, unless >there is a switch or multiple switches involved. Is the primary storage >NFS? >> >> Simon Weller/615-312-6068 >> >> -Original Message- >> From: Asai [a...@globalchangemusic.org] >> Received: Friday, 17 Mar 2017, 6:00PM >> To: users@cloudstack.apache.org [users@cloudstack.apache.org] >> Subject: Re: 1 Pod Host Offline, All VMs Shut Down >> >> KVM, and the damaged cable connected host 2 and PS 2 back to the host >1 >> and PS 1. Does that give you enough info to work with? >> >> >> On 2017-03-17 3:55 PM, Simon Weller wrote: >>> What hypervisor are you using and what did the damaged cable >connect? >>> >>> Simon Weller/615-312-6068 >>> >>> -Original Message- >>> From: Asai [a...@globalchangemusic.org] >>> Received: Friday, 17 Mar 2017, 5:45PM >>> To: users@cloudstack.apache.org [users@cloudstack.apache.org] >>> Subject: 1 Pod Host Offline, All VMs Shut Down >>> >>> Greetings, >>> >>> I have 2 hosts in a pod, and 3 primary storage shares. I just ran >into >>> an issue where a cable accidentally got damaged which took host 2 >>> offline and primary storage 2 offline, but ALL of my VMs shut down >>> spontaneously. 
Weird, because only half of my VMs are running on >host 2 >>> and primary storage 2. Even the VMs on host 1 and primary storage 1 >>> shut down. Can anyone shed light on why this would have happened >and >>> how I can avoid this in the future? >>> >>> Much thanks, >>> Asai >>> >>> >> -- Asai
Re: 1 Pod Host Offline, All VMs Shut Down
Yes, the cable was a fiber cable connecting two switches, one which connects host 1 and PS 1 (NFS share running on host 1), and the other switch connects host 2 and PS 2 (NFS share running on host 2). On 2017-03-17 4:04 PM, Simon Weller wrote: So was this cable connecting a couple of switches together? I'm confused when you say a single cable was connecting 4 devices, unless there is a switch or multiple switches involved. Is the primary storage NFS? Simon Weller/615-312-6068 -Original Message- From: Asai [a...@globalchangemusic.org] Received: Friday, 17 Mar 2017, 6:00PM To: users@cloudstack.apache.org [users@cloudstack.apache.org] Subject: Re: 1 Pod Host Offline, All VMs Shut Down KVM, and the damaged cable connected host 2 and PS 2 back to the host 1 and PS 1. Does that give you enough info to work with? On 2017-03-17 3:55 PM, Simon Weller wrote: What hypervisor are you using and what did the damaged cable connect? Simon Weller/615-312-6068 -Original Message- From: Asai [a...@globalchangemusic.org] Received: Friday, 17 Mar 2017, 5:45PM To: users@cloudstack.apache.org [users@cloudstack.apache.org] Subject: 1 Pod Host Offline, All VMs Shut Down Greetings, I have 2 hosts in a pod, and 3 primary storage shares. I just ran into an issue where a cable accidentally got damaged which took host 2 offline and primary storage 2 offline, but ALL of my VMs shut down spontaneously. Weird, because only half of my VMs are running on host 2 and primary storage 2. Even the VMs on host 1 and primary storage 1 shut down. Can anyone shed light on why this would have happened and how I can avoid this in the future? Much thanks, Asai
Re: 1 Pod Host Offline, All VMs Shut Down
KVM, and the damaged cable connected host 2 and PS 2 back to the host 1 and PS 1. Does that give you enough info to work with? On 2017-03-17 3:55 PM, Simon Weller wrote: What hypervisor are you using and what did the damaged cable connect? Simon Weller/615-312-6068 -Original Message- From: Asai [a...@globalchangemusic.org] Received: Friday, 17 Mar 2017, 5:45PM To: users@cloudstack.apache.org [users@cloudstack.apache.org] Subject: 1 Pod Host Offline, All VMs Shut Down Greetings, I have 2 hosts in a pod, and 3 primary storage shares. I just ran into an issue where a cable accidentally got damaged which took host 2 offline and primary storage 2 offline, but ALL of my VMs shut down spontaneously. Weird, because only half of my VMs are running on host 2 and primary storage 2. Even the VMs on host 1 and primary storage 1 shut down. Can anyone shed light on why this would have happened and how I can avoid this in the future? Much thanks, Asai
1 Pod Host Offline, All VMs Shut Down
Greetings, I have 2 hosts in a pod, and 3 primary storage shares. I just ran into an issue where a cable accidentally got damaged which took host 2 offline and primary storage 2 offline, but ALL of my VMs shut down spontaneously. Weird, because only half of my VMs are running on host 2 and primary storage 2. Even the VMs on host 1 and primary storage 1 shut down. Can anyone shed light on why this would have happened and how I can avoid this in the future? Much thanks, Asai
Re: Info on 4.9.2 release
Does this version of Cloudstack support Xenserver 7 yet? On 2017-01-06 3:23 PM, Asai wrote: Thank you so much. On 2017-01-06 3:12 PM, Simon Weller wrote: Asai, Release notes should follow shortly. Point releases don't have any DB updates. It should be as simple as shutting down your agents and management server and then upgrading deb or rpm packages and then restarting everything. Always backup your database before trying any updates. - Si From: Asai <a...@globalchangemusic.org> Sent: Friday, January 6, 2017 4:06 PM To: users@cloudstack.apache.org Subject: Info on 4.9.2 release Greetings, Can anyone point me to info about the latest 4.9.2 release and the proper procedure for updating from 4.9? Thank you, Asai
Re: Info on 4.9.2 release
Thank you so much. On 2017-01-06 3:12 PM, Simon Weller wrote: Asai, Release notes should follow shortly. Point releases don't have any DB updates. It should be as simple as shutting down your agents and management server and then upgrading deb or rpm packages and then restarting everything. Always backup your database before trying any updates. - Si From: Asai <a...@globalchangemusic.org> Sent: Friday, January 6, 2017 4:06 PM To: users@cloudstack.apache.org Subject: Info on 4.9.2 release Greetings, Can anyone point me to info about the latest 4.9.2 release and the proper procedure for updating from 4.9? Thank you, Asai
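The point-release procedure Simon describes can be sketched as a short command sequence. This is a hedged outline only: it assumes the RPM packaging path (use apt on Debian/Ubuntu) and the default `cloud` database name, and the `run()` wrapper just prints each step so the sequence can be reviewed first (drop the wrapper to actually execute).

```shell
# Dry-run sketch of a 4.9.x point upgrade (RPM path assumed).
run() { echo "+ $*"; }   # print steps instead of executing them

run mysqldump -u root -p cloud -r cloud-backup.sql   # always back up the DB first
run systemctl stop cloudstack-management
run yum upgrade cloudstack-management cloudstack-common
# then, on each KVM host:
run systemctl stop cloudstack-agent
run yum upgrade cloudstack-agent
run systemctl start cloudstack-agent
# finally, back on the management server:
run systemctl start cloudstack-management
```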
Info on 4.9.2 release
Greetings, Can anyone point me to info about the latest 4.9.2 release and the proper procedure for updating from 4.9? Thank you, Asai
Re: KVM Live VM Snapshots
Thanks for the feedback. I guess I need a little more assistance in understanding the correct procedure for disaster recovery. With XenServer I have my VMs snapshotted and then exported to backup weekly, so I've never really used snapshots except to export the snapshot as a backup VM. So... how can I do that in CloudStack with KVM? Or is there a better solution for disaster recovery that doesn't cost $$$?

On December 19, 2016 6:01:02 AM MST, Simon Weller <swel...@ena.com> wrote:
> There is a pending PR awaiting merge that added VM snapshots to ACS for KVM:
>
> https://github.com/apache/cloudstack/pull/977
>
> In regards to why it hasn't been merged, that's a good question. Why don't
> you comment on the PR and ask? Community involvement is the best way to
> get features moving forward.
>
> I do agree with Marco, though, that snapshots are not really designed for
> BCDR purposes.
>
> - Si
>
> From: Marc-Aurèle Brothier <ma...@exoscale.ch>
> Sent: Monday, December 19, 2016 2:31 AM
> To: users@cloudstack.apache.org
> Subject: Re: KVM Live VM Snapshots
>
> Hi Asai,
>
> In my opinion, doing a VM snapshot is a step in the wrong direction. Your
> applications/systems running inside your VMs should be designed to handle
> an OS crash. Then a new, freshly installed VM should be able to rejoin
> your application setup so that you again have an appropriate number of
> healthy nodes.
>
> Marco
>
> On Mon, Dec 19, 2016 at 4:34 AM, Asai <a...@globalchangemusic.org> wrote:
>> Greetings,
>>
>> Is it correct that there is currently no support in CloudStack for KVM
>> live VM snapshots? I see that volume snapshots are available for running
>> VMs, but that makes me wonder what everyone is doing to get a disaster
>> recovery backup of a KVM-based VM. I asked this question a few weeks
>> back, but only one person responded with one solution, and I am really
>> trying to figure out what the best solutions are here.
>>
>> Has anybody seen this script?
>> https://gist.github.com/ringe/334ee88ba5451c8f5732
>>
>> What is the community's opinion of scripts like this? And, big question:
>> if this script is good, why isn't it integrated into CloudStack?
>>
>> Thanks,
>> Asai

--
Asai
KVM Live VM Snapshots
Greetings, Is it correct that currently there is no support in Cloudstack for KVM live VM snapshots? I see that Volume snapshots are available for running VMs, but that makes me wonder what everyone is doing to get a disaster recovery backup of a KVM based VM? I did ask this question a few weeks back, but only one person responded with one solution, and I am really trying to figure out what the best solutions are here. Has anybody seen this script? https://gist.github.com/ringe/334ee88ba5451c8f5732 What is the community's opinion of scripts like this? And also, big question, if this script is good, why isn't it integrated into Cloudstack? Thanks, Asai
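For the disaster-recovery question in this thread, one common approach outside CloudStack (this is not the PR's VM-snapshot feature, just standard libvirt tooling) is a disk-only external snapshot: snapshot, copy the now-quiescent base image to backup storage, then blockcommit the overlay back. The VM name, disk target, and paths below are hypothetical, and the `run()` wrapper prints the steps as a dry run for review.

```shell
# Dry-run sketch: live disk backup of a KVM guest via external snapshot.
run() { echo "+ $*"; }   # print steps instead of executing them

VM=vm01                                    # hypothetical guest name
DISK=/var/lib/libvirt/images/$VM.qcow2     # hypothetical disk path

run virsh snapshot-create-as $VM backup --disk-only --atomic --no-metadata \
    --diskspec vda,file=$DISK.overlay
run cp $DISK /backup/$VM.qcow2             # base is frozen while the overlay absorbs writes
run virsh blockcommit $VM vda --active --pivot   # merge overlay back into the base
run rm -f $DISK.overlay                    # remove the leftover overlay file
```

Note this captures a crash-consistent disk image only; quiescing the guest (e.g. via the qemu guest agent) is a separate concern.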
RE: SSVM Creation Failure with Advanced Zone
Hi, Simon. I'm using KVM. Here's a sample of the management server logs: http://pastebin.com/DJvmRfRg And agent logs: http://pastebin.com/Cpmm3w5M

On 2016-11-19 14:55, Simon Weller wrote:
> Can you post some management server and agent logs?
>
> What hypervisor are you using?
>
> Simon Weller / ENA
> (615) 312-6068
>
> -----Original Message-----
> From: Asai [a...@globalchangemusic.org]
> Received: Saturday, 19 Nov 2016, 12:04PM
> To: users@cloudstack.apache.org
> Subject: SSVM Creation Failure with Advanced Zone
>
> Hello,
>
> Hopefully I can gain some insight here. When I create a basic zone using
> the wizard, everything goes smoothly and the secondary storage works
> great. But (and I know I'm missing something here, I just don't know
> what) when I try to set up an advanced zone I always get this error:
>
> Secondary Storage Vm creation failure. zone: Av1, error details: null
>
> Secondary storage seems to be mounting normally now, and is in the same
> subnet as the management server and pod. I have 1 NIC that's set up to
> support 2 VLANs and does management traffic on its non-VLAN IP, e.g. NIC 1
> IP is 192.168.100.202 (cloudbr0), NIC 1 public VLAN is VLAN 210
> (cloudbr1), and NIC 1 private for guest traffic is VLAN 220 (cloudbr2).
> Again, this setup seems to work OK with a basic zone, but not for
> advanced.
>
> Can anyone offer any direction?
>
> Thanks,
> Asai
Re: SSVM Creation Failure with Advanced Zone
I don't know if it's relevant, but no Virtual Router has been created either. On 2016-11-19 11:04 AM, Asai wrote: Secondary Storage Vm creation failure. zone: Av1, error details: null
SSVM Creation Failure with Advanced Zone
Hello, Hopefully I can gain some insight here. When I create a basic zone using the wizard, everything goes smoothly and the Secondary Storage works great. But--and I know I'm missing something here, I just don't know what--when I try to set up an advanced zone I always get this error: Secondary Storage Vm creation failure. zone: Av1, error details: null Secondary storage seems to be mounting normally now, and is in the same subnet as Management server and Pod. I have 1 NIC that's set up to support 2 VLANS and does management traffic on its NON VLAN IP. e.g. NIC 1 IP is 192.168.100.202 (cloudbr0), NIC 1 Public VLAN is VLAN 210 (cloudbr1), and NIC 1 Private for guest traffic is VLAN 220 (cloudbr2). Again, this setup seems to work OK with a basic zone, but not for advanced. Can anyone offer any direction? Thanks, Asai
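For reference, the cloudbr1/VLAN 210 layout described in this thread would look roughly like the fragment below on a CentOS/RHEL KVM host. File names and values are assumptions restating the thread's numbers, not a known-good copy of the poster's config; cloudbr2/VLAN 220 follows the same pattern.

```
# /etc/sysconfig/network-scripts/ifcfg-eth0.210  (VLAN 210 sub-interface)
DEVICE=eth0.210
VLAN=yes
BRIDGE=cloudbr1
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-cloudbr1  (public-traffic bridge)
DEVICE=cloudbr1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
DELAY=0
```

In the advanced zone wizard, the public traffic label would then be cloudbr1 and the guest traffic label cloudbr2; a label/bridge mismatch is a common cause of SSVM creation failing with a null error.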
Re: SSVM NFS Access Denied Problems
Hi Dag,

Yes, what I found was that I had not properly set up the system VMs on the NFS server. When I created a basic zone before, this seemed to happen automatically, but for an advanced zone there was apparently more work to do. So, lesson learned!

Asai
Network and Systems Administrator
GLOBAL CHANGE MEDIA
office: 520.398.2542
http://globalchange.media
Tucson, AZ

> On Nov 15, 2016, at 2:06 AM, Dag Sonstebo <dag.sonst...@shapeblue.com> wrote:
>
> Hi Asai,
>
> Can you give some more feedback on what your previous testing outcome
> was? You say you can mount from the mgmt. server; can you read and write?
> How about from the hypervisors?
>
> Regards,
> Dag Sonstebo
>
> On 15/11/2016, 04:53, "Asai" <a...@globalchangemusic.org> wrote:
>
>> Thanks again. So, I've made a little more progress on this. What does it
>> mean if I can mount the NFS share from the management server CLI, but
>> still get access denied after adding an advanced zone in the management
>> GUI?
>>
>> On 2016-11-13 4:13 PM, Dag Sonstebo wrote:
>>> Hi Asai,
>>>
>>> I don't have much experience with NAS4Free, so I can't comment on it;
>>> however, I do know the CloudStack / hypervisor / SSVM requirements are
>>> fairly straightforward, so it sounds like your NAS appliance is
>>> possibly to blame.
>>>
>>> A few things to try:
>>>
>>> - Manually mount an NFS share from your management server (as already
>>>   mentioned by Sergey). If you can mount the share, make sure you can
>>>   read/write.
>>> - Manually mount the same NFS share from your hypervisors. Again, try
>>>   to read/write.
>>> - Log in to your SSVM and run /usr/local/cloud/systemvm/ssvm-check.sh.
>>>
>>> Between these three it will tell you if you have any NFS access at all.
>>> If you don't, it sounds like you need to spend some more time on your
>>> NAS4Free box; in that case I would concentrate on a single client (e.g.
>>> the management server) until you have worked out what the issue is.
>>> Once resolved, you can move back to testing with your hypervisors and
>>> CloudStack. In other words, don't waste your time troubleshooting
>>> through CloudStack until you have the underlying connectivity and
>>> access sorted.
>>>
>>> The other thing, obviously, is to configure your NAS4Free box with as
>>> verbose logging as possible and check why it denies access.
>>>
>>> Regards,
>>> Dag Sonstebo
>>>
>>> On 13/11/2016, 22:48, "Asai" <a...@globalchangemusic.org> wrote:
>>>
>>>> Thanks Dag,
>>>>
>>>> The thing that's really got me right now is that even if I set the
>>>> allowed network subnet to /16, which contains all necessary subnets, I
>>>> get access denied. Even when I mount from the CLI as root I get access
>>>> denied... Maybe it's a NAS4Free bug? NAS4Free runs on FreeBSD... maybe
>>>> there's some subtle incompatibility?
>>>>
>>>> On November 13, 2016 2:41:23 AM MST, Dag Sonstebo
>>>> <dag.sonst...@shapeblue.com> wrote:
>>>>> Asai,
>>>>>
>>>>> Also keep in mind your SSVM will utilize an IP address from your pod
>>>>> management range, so you need to allow NFS share access from this.
>>>>>
>>>>> Regards,
>>>>> Dag Sonstebo
>>>>> Cloud Architect
>>>>> ShapeBlue
>>>>>
>>>>> On 12/11/2016, 20:14, "Sergey Levitskiy"
>>>>> <sergey.levits...@autodesk.com> wrote:
>>>>>
>>>>> Export the NFS share so that root can mount it. You can also try
>>>>> manually mounting it from the management server and see if you can
>>>>> write to it.
>>>>>
>>>>>> On Nov 12, 2016, at 10:54 AM, Asai <a...@globalchangemusic.org> wrote:
>>>>>>
>>>>>> Greetings,
>>>>>>
>>>>>> Going a little nuts here. I've been attempting to create an advanced
>>>>>> zone with my secondary storage on a separate NFS server running
>>>>>> NAS4Free. The problem is I keep getting an "access denied while
>>>>>> trying to mount" error and I cannot figure out why. The directory on
>>>>>> the NFS server is empty, permissions are set to 777, the "All Dirs"
>>>>>> option is enabled, allowed networks are set to allow the same
>>>>>> subnet, and both the CloudStack management network and the secondary
>>>>>> storage server are on the same subnet, but I can't seem to figure
>>>>>> this out... does anyone have any brilliant insights into this
>>>>>> maddening problem?
>>>>>>
>>>>>> Thanks!!!
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London WC2N 4HS, UK
> @shapeblue
Re: SSVM NFS Access Denied Problems
Thanks again. So, I've made a little more progress on this. What does it mean if I can mount the NFS share from the management server CLI, but still get access denied after adding an advanced zone in the management GUI?

On 2016-11-13 4:13 PM, Dag Sonstebo wrote:
> Hi Asai,
>
> I don't have much experience with NAS4Free, so I can't comment on it;
> however, I do know the CloudStack / hypervisor / SSVM requirements are
> fairly straightforward, so it sounds like your NAS appliance is possibly
> to blame.
>
> A few things to try:
>
> - Manually mount an NFS share from your management server (as already
>   mentioned by Sergey). If you can mount the share, make sure you can
>   read/write.
> - Manually mount the same NFS share from your hypervisors. Again, try to
>   read/write.
> - Log in to your SSVM and run /usr/local/cloud/systemvm/ssvm-check.sh.
>
> Between these three it will tell you if you have any NFS access at all.
> If you don't, it sounds like you need to spend some more time on your
> NAS4Free box; in that case I would concentrate on a single client (e.g.
> the management server) until you have worked out what the issue is. Once
> resolved, you can move back to testing with your hypervisors and
> CloudStack. In other words, don't waste your time troubleshooting through
> CloudStack until you have the underlying connectivity and access sorted.
>
> The other thing, obviously, is to configure your NAS4Free box with as
> verbose logging as possible and check why it denies access.
>
> Regards,
> Dag Sonstebo
>
> On 13/11/2016, 22:48, "Asai" <a...@globalchangemusic.org> wrote:
>
>> Thanks Dag,
>>
>> The thing that's really got me right now is that even if I set the
>> allowed network subnet to /16, which contains all necessary subnets, I
>> get access denied. Even when I mount from the CLI as root I get access
>> denied... Maybe it's a NAS4Free bug? NAS4Free runs on FreeBSD... maybe
>> there's some subtle incompatibility?
>>
>> On November 13, 2016 2:41:23 AM MST, Dag Sonstebo
>> <dag.sonst...@shapeblue.com> wrote:
>>> Asai,
>>>
>>> Also keep in mind your SSVM will utilize an IP address from your pod
>>> management range, so you need to allow NFS share access from this.
>>>
>>> Regards,
>>> Dag Sonstebo
>>> Cloud Architect
>>> ShapeBlue
>>>
>>> On 12/11/2016, 20:14, "Sergey Levitskiy"
>>> <sergey.levits...@autodesk.com> wrote:
>>>
>>> Export the NFS share so that root can mount it. You can also try
>>> manually mounting it from the management server and see if you can
>>> write to it.
>>>
>>>> On Nov 12, 2016, at 10:54 AM, Asai <a...@globalchangemusic.org> wrote:
>>>>
>>>> Greetings,
>>>>
>>>> Going a little nuts here. I've been attempting to create an advanced
>>>> zone with my secondary storage on a separate NFS server running
>>>> NAS4Free. The problem is I keep getting an "access denied while trying
>>>> to mount" error and I cannot figure out why. The directory on the NFS
>>>> server is empty, permissions are set to 777, the "All Dirs" option is
>>>> enabled, allowed networks are set to allow the same subnet, and both
>>>> the CloudStack management network and the secondary storage server are
>>>> on the same subnet, but I can't seem to figure this out... does anyone
>>>> have any brilliant insights into this maddening problem?
>>>>
>>>> Thanks!!!
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London WC2N 4HS, UK
> @shapeblue

--
Asai
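The read/write part of Dag's checklist can be as simple as the probe below. Point MNT at the mounted NFS path on whichever client you are testing (management server, hypervisor, or SSVM); a temp directory stands in by default so the sketch is runnable as-is.

```shell
# Minimal read/write probe for a mounted share (temp dir as a stand-in path).
MNT=${MNT:-$(mktemp -d)}
echo "acs-rw-probe" > "$MNT/probe.txt"                          # write test
grep -q acs-rw-probe "$MNT/probe.txt" && echo "read/write OK"   # read test
rm -f "$MNT/probe.txt"                                          # clean up
```

If the write fails as root on an actual NFS mount, suspect root squashing on the server side, which matches Sergey's advice in this thread.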
Re: SSVM NFS Access Denied Problems
Thanks Dag,

The thing that's really got me right now is that even if I set the allowed network subnet to /16, which contains all necessary subnets, I get access denied. Even when I mount from the CLI as root I get access denied... Maybe it's a NAS4Free bug? NAS4Free runs on FreeBSD... maybe there's some subtle incompatibility?

On November 13, 2016 2:41:23 AM MST, Dag Sonstebo <dag.sonst...@shapeblue.com> wrote:
> Asai,
>
> Also keep in mind your SSVM will utilize an IP address from your pod
> management range, so you need to allow NFS share access from this.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 12/11/2016, 20:14, "Sergey Levitskiy"
> <sergey.levits...@autodesk.com> wrote:
>
> Export the NFS share so that root can mount it. You can also try manually
> mounting it from the management server and see if you can write to it.
>
>> On Nov 12, 2016, at 10:54 AM, Asai <a...@globalchangemusic.org> wrote:
>>
>> Greetings,
>>
>> Going a little nuts here. I've been attempting to create an advanced
>> zone with my secondary storage on a separate NFS server running
>> NAS4Free. The problem is I keep getting an "access denied while trying
>> to mount" error and I cannot figure out why. The directory on the NFS
>> server is empty, permissions are set to 777, the "All Dirs" option is
>> enabled, allowed networks are set to allow the same subnet, and both the
>> CloudStack management network and the secondary storage server are on
>> the same subnet, but I can't seem to figure this out... does anyone have
>> any brilliant insights into this maddening problem?
>>
>> Thanks!!!
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London WC2N 4HS, UK
> @shapeblue

--
Asai
SSVM NFS Access Denied Problems
Greetings, Going a little nuts here. I've been attempting to create an advanced zone with my Secondary Storage on a separate NFS server running NAS4Free. The problem is I keep getting an access denied while trying to mount error and I cannot figure out why this is. The directory is blank on the NFS server, permissions are set to 777, All Dirs option is enabled, allowed networks are set to allow the same subnet, both the Cloudstack MGMT network and secondary storage server are on the same subnet, but I can't seem to figure this out... does anyone have any brilliant insights into this maddening problem? Thanks!!!
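Sergey's advice later in this thread ("export the NFS share so that root can mount it") usually comes down to the export options on the server. The fragments below are hedged examples, with the export paths and the 192.168.100.0/24 subnet assumed from the thread; the root-mapping option is the piece that lets the SSVM and hypervisors mount as root.

```
# Linux NFS server -- /etc/exports:
/export/secondary 192.168.100.0/24(rw,async,no_root_squash,no_subtree_check)

# FreeBSD-based NAS4Free -- /etc/exports uses a different syntax:
/mnt/secondary -alldirs -maproot=root -network 192.168.100.0 -mask 255.255.255.0
```

Without `no_root_squash` (Linux) or `-maproot=root` (FreeBSD), root on the client is mapped to an unprivileged user and the mount or write fails with access denied even when directory permissions are 777.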
Re: Good backup solutions for Cloudstack
Thanks for that. How about backing up secondary storage and the CloudStack installation in general?

On 2016-11-04 3:51 AM, Glenn Wagner wrote:
> Hi,
>
> For KVM it's more disk-based/LVM. You can use scripts like
> https://github.com/Win2ix/vmsnapshot or, for a more commercial option,
> http://www.acronis.com/en-us/business/backup-advanced/rhev/
>
> Thanks,
> Glenn
>
> glenn.wag...@shapeblue.com
> www.shapeblue.com
> First Floor, Victoria Centre, 7 Victoria Street, Somerset West, Cape Town
> 7129, South Africa
> @shapeblue
>
> -----Original Message-----
> From: a...@globalchangemusic.org
> To: users@cloudstack.apache.org
> Subject: Re: Good backup solutions for Cloudstack
> Date: Thu, 03 Nov 2016 09:12:08 -0700
>
> How about KVM?
>
> On 2016-11-02 16:47, Sergey Levitskiy wrote:
>> Veeam works OK for VMware-based implementations. You can tag VMs, and
>> based on the vSphere tag Veeam will automatically pick them up for
>> backup processing.
>>
>> On 11/2/16, 4:21 PM, "Asai" <a...@globalchangemusic.org> wrote:
>>
>> Hello,
>>
>> Can anyone recommend a good backup solution for a CloudStack deployment?
>> What's the best way of backing up VMs and snapshots? I have experience
>> with XenServer, but I'm moving into a CS deployment now and am looking
>> for recommendations on best practices.
>>
>> Thanks,
>> Asai
>> Network and Systems Administrator
>> GLOBAL CHANGE MEDIA
>> http://globalchange.media
>> Tucson, AZ
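On the follow-up question about secondary storage and the rest of the installation: secondary storage is plain files on NFS, so a file-level copy plus a database dump covers most of the state. The host name and paths below are hypothetical, and the `run()` wrapper prints the steps as a dry run for review (drop it to execute).

```shell
# Dry-run sketch: back up secondary storage files and the CloudStack DB.
run() { echo "+ $*"; }   # print steps instead of executing them

run rsync -a --delete /export/secondary/ backuphost:/backups/secondary/
run mysqldump -u root -p cloud -r /backups/cloud-db.sql   # 'cloud' is the default DB name
```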
Re: Good backup solutions for Cloudstack
How about KVM?

On 2016-11-02 16:47, Sergey Levitskiy wrote:
> Veeam works OK for VMware-based implementations. You can tag VMs, and
> based on the vSphere tag Veeam will automatically pick them up for backup
> processing.
>
> On 11/2/16, 4:21 PM, "Asai" <a...@globalchangemusic.org> wrote:
>
> Hello,
>
> Can anyone recommend a good backup solution for a CloudStack deployment?
> What's the best way of backing up VMs and snapshots? I have experience
> with XenServer, but I'm moving into a CS deployment now and am looking
> for recommendations on best practices.
>
> Thanks,
> Asai
> Network and Systems Administrator
> GLOBAL CHANGE MEDIA
> http://globalchange.media
> Tucson, AZ
Good backup solutions for Cloudstack
Hello, Can anyone recommend a good backup solution for a Cloudstack deployment? What’s the best way of backing up VMs and snapshots? I have experience with XenServer, but I’m moving into a CS deployment now and am looking for recommendations on best practices. Thanks Asai Network and Systems Administrator GLOBAL CHANGE MEDIA http://globalchange.media Tucson, AZ
Reusing ISO from Secondary Storage
Greetings, Noob question. I had an instance of Cloudstack up and running which I scrapped and started over. In that instance I had downloaded and registered an ISO which is now in secondary storage. In my new instance, I would like to reuse that ISO but CloudStack is not seeing it. How do I retrieve an ISO from secondary storage for reuse? Asai Network and Systems Administrator GLOBAL CHANGE MEDIA office: 520.398.2542 http://globalchange.media Tucson, AZ