Anshul and Lotic, after debugging and inspecting the code for quite a while,
I understood the complete picture and created a solution for it. The PR with
the solution can be found at https://github.com/apache/cloudstack/pull/2315.
This was a clear example of what I constantly tell people here: code needs
to be clear, concise, well tested, and documented. What follows is some
explanation regarding the removal of code introduced with commit
“2c4ea503f92bcf9c611f409d5cdecb”.

With commit “8a3fd10” Daan and I removed a small piece of code introduced
by “2c4ea503f92bcf9c611f409d5cdecb” because it did not seem to make much
sense. We did not remove everything; we only removed the part that was
looking for a random host in the zone to execute the command. As Anshul and
I discussed in another PR, the code introduced in
“2c4ea503f92bcf9c611f409d5cdecb” would only work for zones deployed
exclusively with XenServer clusters (that is, zones with no other hypervisor
types). It created a limitation in ACS that should not exist. I stress again
that this only happened because of the lack of documentation and clear
coding. For instance, the commit message of
“2c4ea503f92bcf9c611f409d5cdecb” says it introduced an “optimization”, so I
assumed that the process executed by our code base before that commit was
working, just not as fast or as well as the code with
“2c4ea503f92bcf9c611f409d5cdecb”. However, that is not the case; the code in
“2c4ea503f92bcf9c611f409d5cdecb” does not optimize anything; it is, in fact,
fixing/creating a workflow to create templates from snapshots in XenServer
deployments.

The first PR (#1176), intended to solve CLOUDSTACK-9025, only tackled the
problem for CloudStack deployments that use a single hypervisor type
(restricted to XenServer hosts in the same zone; that is, it did not expect
multiple hypervisor types in the same zone). Additionally, the lack of
information regarding that solution (documentation, test cases, and
descriptions in the PRs and the Jira ticket) led to the code being removed
in #1124 after a long discussion and analysis in #1056. That piece of code
seemed logicless: it would receive a hostId and then swap that hostId for
another hostId of the zone without doing any checks; it did not even check
the hypervisor type or the storage the host was connected to.
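
To make it concrete, the removed logic amounted to something like the sketch
below. This is a reconstruction in Java from the description above, not the
literal removed code; the class and method names are hypothetical.

import java.util.List;
import java.util.Random;

// Reconstruction of the removed anti-pattern: swap the received hostId for
// any other host of the zone, without checking the hypervisor type or the
// storage the host is connected to.
class RandomHostAntiPatternSketch {
    static long swapForRandomHost(long receivedHostId, List<Long> hostIdsInZone) {
        if (hostIdsInZone.isEmpty()) {
            return receivedHostId;
        }
        // No hypervisor check, no storage check: this is what broke zones
        // with multiple hypervisor types.
        return hostIdsInZone.get(new Random().nextInt(hostIdsInZone.size()));
    }
}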

The problem reported in CLOUDSTACK-9025 is caused by the partial snapshots
that are taken in XenServer. That is, we do not take a complete snapshot,
but a partial one that contains only the modified data. This requires
rebuilding the VHD hierarchy when creating a template out of the snapshot.
The point is that the first hostId received is not a hypervisor hostId, but
a system VM (SSVM) ID. That is why the code in #1176 fixed the problem for
some deployment scenarios, but would cause problems for scenarios where we
have multiple hypervisors in the same zone. We need to execute the creation
of the VHD that represents the template on the hypervisor, so the VHD chain
can be built using the parent links.
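
To illustrate why the parent links matter, here is a toy sketch in Java
(illustrative only, not CloudStack code): a template built from a partial
VHD is only usable if every parent it references is visible from wherever
the copy command executes.

import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of the point above: a partial VHD is only usable if every
// parent it references is present where the "copy command" executes.
class VhdChainSketch {
    // parentOf maps a VHD name to its parent name, or "none" for a base VHD.
    static boolean chainComplete(String vhd, Map<String, String> parentOf, Set<String> present) {
        Set<String> seen = new HashSet<>();
        String cur = vhd;
        while (present.contains(cur) && seen.add(cur)) {
            String parent = parentOf.get(cur);
            if (parent == null || "none".equals(parent)) {
                return true; // reached a base (full) VHD
            }
            cur = parent;
        }
        return false; // a referenced VHD is missing, or the metadata is cyclic
    }
}

If the SSVM only sees the partial VHD, the chain does not resolve there,
while a hypervisor host attached to the primary storage sees the whole chain.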

The PR #2315 changes the behavior of
com.cloud.hypervisor.XenServerGuru.getCommandHostDelegation(long, Command).
From now on, we replace the hostId that is intended to execute the “copy
command” that will create the VHD of the template according to some
conditions that were already in place. The idea is that, starting with
XenServer 6.2.0 hotFix ESP1004, we need to execute the command on the
hypervisor host and not from the SSVM. Moreover, the method was made more
readable, and test cases were created to assure that from XenServer 6.2.0
hotFix ESP1004 onward we change the hostId that will be used to execute the
“copy command”.
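
For those who do not want to open the PR, the decision is roughly the
following. This is a minimal, self-contained sketch; the lookup interface
and the version comparison are stand-ins I made up for this e-mail, not the
actual CloudStack API.

// Minimal sketch of the delegation decision described above; illustrative only.
class CommandHostDelegationSketch {

    // Hypothetical stand-in for the HostDao lookup discussed further below.
    interface HostLookup {
        Long findHostConnectedToSnapshotStoragePool(long snapshotId);
        String hypervisorVersion(long hostId);
    }

    // XenServer 6.2.0 hotFix ESP1004 is the threshold mentioned above; a
    // plain numeric compare cannot see the hotfix level, so this is only
    // approximate.
    static final String MIN_VERSION = "6.2.0";

    // Returns the hostId that should execute the "copy command".
    static long delegate(long originalHostId, long snapshotId, HostLookup dao) {
        Long candidate = dao.findHostConnectedToSnapshotStoragePool(snapshotId);
        if (candidate == null) {
            return originalHostId; // no suitable hypervisor host was found
        }
        // From XenServer 6.2.0 hotFix ESP1004 onward, run the command on the
        // hypervisor host (not the SSVM) so the VHD chain can be rebuilt.
        return versionAtLeast(dao.hypervisorVersion(candidate), MIN_VERSION)
                ? candidate
                : originalHostId;
    }

    static boolean versionAtLeast(String version, String min) {
        String[] v = version.split("\\.");
        String[] m = min.split("\\.");
        for (int i = 0; i < Math.min(v.length, m.length); i++) {
            int cmp = Integer.compare(Integer.parseInt(v[i]), Integer.parseInt(m[i]));
            if (cmp != 0) {
                return cmp > 0;
            }
        }
        return v.length >= m.length;
    }
}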

Furthermore, we are not selecting a random host from the zone anymore. A new
method called “findHostConnectedToSnapshotStoragePoolToExecuteCommand” was
introduced in the HostDao object; using this method, we look for a host in
the cluster that uses the storage pool where the volume the snapshot was
taken from resides. By doing this, we guarantee that the host used to create
the template is connected to the primary storage where all of the snapshot's
parent VHDs are stored.
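
For illustration, this is the kind of query I would expect such a lookup to
boil down to. It is a sketch only; the table and column names are my
assumption of the schema, not the SQL the DAO actually generates.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of the lookup described above: pick an Up host from the cluster
// that uses the storage pool holding the volume the snapshot was taken from.
class HostForSnapshotLookupSketch {
    static Long findHostToExecuteCommand(Connection conn, long snapshotId) throws SQLException {
        String sql = "SELECT h.id FROM host h "
                + "JOIN storage_pool sp ON sp.cluster_id = h.cluster_id "
                + "JOIN volumes v ON v.pool_id = sp.id "
                + "JOIN snapshots s ON s.volume_id = v.id "
                + "WHERE s.id = ? AND h.status = 'Up' LIMIT 1";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, snapshotId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getLong(1) : null; // null: no suitable host
            }
        }
    }
}

The essential point of the design is to pin the command to a host in the
cluster that can actually see the snapshot's parent VHDs, instead of any
random host in the zone.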

Also, I opened the PR only against master. If someone needs the fix in any
other version, we can cherry-pick it to 4.9.2.0.

On Wed, Nov 8, 2017 at 9:41 AM, Anshul Gangwar <anshul.gang...@accelerite.com> wrote:

> Rafael, in ACS 4.5.2 you are facing the issue due to the XenServer version
> (it was only working on XenServer 6.2 with some patches), which got fixed in
> 4.6 with commit 2c4ea503f92bcf9c611f409d5cdecb42b0115b69. Now, with the code
> cleanup in commit “8a3fd10”, it’s broken. During the storage refactoring
> changes in commit “8caf52c”, an optimization was done to send the CopyCommand
> to the hypervisor resource instead of the SSVM. Now, with commit “8a3fd10”,
> it’s going to the SSVM again, so it works fine for the first snapshot but is
> broken for linked snapshots.
>
> Regards,
> Anshul
>
> On 04/11/17, 7:20 PM, "Rafael Weingärtner" <rafaelweingart...@gmail.com>
> wrote:
>
>     Anshul, I tried with ACS 4.5.2, where commit “8a3fd10” is not present.
>     This is the commit where Daan and I removed some code that seemed (is,
>     at least for me) logicless
>     (https://github.com/apache/cloudstack/pull/1124 and
>     https://github.com/apache/cloudstack/pull/1056).
>
>     I tested with ACS 4.5.2, which I have in a test environment. I have ACS
>     4.5.2, NFS as the primary storage, and XenServer 6.5 as hosts. I managed
>     to reproduce the problem Lotic is having.
>
>     I spun up a VM, then created a 2GB file in it, and later took a snapshot
>     of its root volume (the VM only has the root volume). The VHD
>     “687f86f2-1aee-4e90-a64a-3b3c28376fcd.vhd” is the first snapshot of the
>     root volume. If I create a template from this snapshot, everything
>     works. I created a new VM from the template of the first snapshot, and
>     it worked, and the 2GB file was there. Then I returned to the first VM
>     (from which I took the snapshot) and created another 1GB file. Then I
>     stopped the VM again and took the second snapshot
>     (27d37d21-13a6-488d-b2f0-0ac54236b83a.vhd).
>     The interesting thing here is that the file size decreased!? I also
>     noticed that there is no hierarchy in the VHD files. I tried to start a
>     VM from a template that I created with the second snapshot, and it did
>     not work, showing the same problem Lotic is having.
>
>     > [root@xenserver-1 ~]# ls -lah /var/cloud_mount/963e9072-0fa1-3448-bf48-ffde1f522104/snapshots/2/8/
>     > total 6.9G
>     > drwxr-xr-x 2 root root 4.0K Nov  4 10:33 .
>     > drwxr-xr-x 3 root root 4.0K Nov  4 09:49 ..
>     > -rw-r--r-- 1 root root 3.2G Nov  4 10:38 27d37d21-13a6-488d-b2f0-0ac54236b83a.vhd
>     > -rw-r--r-- 1 root root 3.7G Nov  4 09:55 687f86f2-1aee-4e90-a64a-3b3c28376fcd.vhd
>     > [root@xenserver-1 ~]# vhd-util scan -f -a -p /var/cloud_mount/963e9072-0fa1-3448-bf48-ffde1f522104/snapshots/2/8/687f86f2-1aee-4e90-a64a-3b3c28376fcd.vhd
>     > vhd=687f86f2-1aee-4e90-a64a-3b3c28376fcd.vhd capacity=21474836480 size=3916767744 hidden=0 parent=none
>     > [root@xenserver-1 ~]# vhd-util scan -f -a -p /var/cloud_mount/963e9072-0fa1-3448-bf48-ffde1f522104/snapshots/2/8/27d37d21-13a6-488d-b2f0-0ac54236b83a.vhd
>     > vhd=27d37d21-13a6-488d-b2f0-0ac54236b83a.vhd capacity=21474836480 size=3420873216 hidden=0 parent=none
>     >
>
>
>     Was it not working way before “8a3fd10”?! I think we will need help from
>     more folks here to see if this is a problem that only affects XenServer
>     deployments. I will also test on my 4.9.2.0 environment again to see if
>     I missed anything.
>
>     On Thu, Nov 2, 2017 at 6:30 AM, Anshul Gangwar <anshul.gang...@accelerite.com> wrote:
>
>     > Hi Lotic,
>     >
>     > Can you try with any release between 4.6 and 4.8 and see if this bug
>     > is there?
>     >
>     > Basically, any release that contains commit
>     > 2c4ea503f92bcf9c611f409d5cdecb42b0115b69 and is missing commit 8a3fd10.
>     >
>     > On 02/11/17, 1:14 AM, "Lotic Lists" <li...@lotic.com.br> wrote:
>     >
>     >     Rafael, I know and appreciate your help :)
>     >
>     >     I recognize all the effort of the CloudStack community.
>     >     I think all admins use snapshots to back up their virtual disks, so
>     >     I think it is not an isolated problem. Imagine a crash on your
>     >     storage and you need to convert your snapshots to templates.
>     >
>     >     Thanks
>     >     Marcelo
>     >
>     >     -----Original Message-----
>     >     From: Rafael Weingärtner [mailto:rafaelweingart...@gmail.com]
>     >     Sent: Wednesday, November 1, 2017 17:17
>     >     To: dev@cloudstack.apache.org
>     >     Subject: Re: Strange size of template from snapshot on XenServer
>     >
>     >     Well, I could not reproduce the problem in ACS 4.9.2.0. So, before
>     >     fixing anything, the person that takes this on should first find
>     >     the root of the problem.
>     >
>     >     I understand what you are saying, that it is a critical problem,
>     >     but you have to understand that this is open-source and free
>     >     software.
>     >
>     >     Having said that, ALL of my work here has been pro bono; even
>     >     though I am a committer and PMC member, I am not willing to dive
>     >     deeper into this issue now, as it does not affect me. That does not
>     >     mean I would not help people if I had time; as you can notice,
>     >     whenever I have some free time I try to be around and guide or
>     >     check things for other people on this list.
>     >
>     >     If this is a huge problem for you right now, I suggest you look for
>     >     companies that provide enterprise support for ACS.
>     >
>     >     On Wed, Nov 1, 2017 at 7:35 PM, Lotic Lists <li...@lotic.com.br> wrote:
>     >
>     >     > Hi Rafael
>     >     >
>     >     > Tested on ACS 4.5.2.2, 4.9.2.0, 4.9.3.0, and 4.10. All versions
>     >     > have the same issue.
>     >     >
>     >     > Description and steps to reproduce
>     >     > https://issues.apache.org/jira/browse/CLOUDSTACK-10128
>     >     >
>     >     > Again, it is a critical problem for production environments.
>     >     >
>     >     > Regards
>     >     > Marcelo
>     >     >
>     >     > -----Original Message-----
>     >     > From: Rafael Weingärtner [mailto:rafaelweingart...@gmail.com]
>     >     > Sent: Tuesday, October 31, 2017 17:10
>     >     > To: dev@cloudstack.apache.org
>     >     > Subject: Re: Strange size of template from snapshot on
> XenServer
>     >     >
>     >     > Yes, it is a big problem. I have not checked the code that
>     >     > creates the template, but I am not sure if the problem you are
>     >     > experiencing is related to 9025. It might be a problem in some
>     >     > other place.
>     >     >
>     >     > My suggestion is for you to create a Jira ticket detailing the
>     >     > situation and wait until someone fixes it. Have you tested on ACS
>     >     > 4.10? If 4.9.2.0 works, it might be something that went only into
>     >     > 4.9.3.0.
>     >     >
>     >     > On Tue, Oct 31, 2017 at 8:04 PM, Lotic Lists <li...@lotic.com.br> wrote:
>     >     >
>     >     > > I tested on a fresh environment, ACS 4.9.3.0, XenServer 6.5
>     >     > > with all patches, and NFS primary storage; we have the same
>     >     > > problem.
>     >     > >
>     >     > > - Snapshot files
>     >     > > [root@acs01 203]# ls -lh /exports/secondary/snapshots/2/6/
>     >     > > total 2.8G
>     >     > > -rw-r--r-- 1 root root 1.7G Oct 31 16:48 cad96f0c-de49-4255-b0a0-9aea5e2297cb.vhd  <-- snap-base
>     >     > > -rw-r--r-- 1 root root 1.1G Oct 31 16:49 fa4e7e4d-531d-42dd-8a80-df6b274584a8.vhd  <-- snap-diff
>     >     > > https://i.imgur.com/bWupsAS.png
>     >     > >
>     >     > > [root@acs01 203]# md5sum /exports/secondary/snapshots/2/6/fa4e7e4d-531d-42dd-8a80-df6b274584a8.vhd
>     >     > > 17eec8d4f6d4c128b34b9e2e1876ceb7  /exports/secondary/snapshots/2/6/fa4e7e4d-531d-42dd-8a80-df6b274584a8.vhd
>     >     > >
>     >     > > - Template files
>     >     > > [root@acs01 203]# ls -lh /exports/secondary/template/tmpl/2/203
>     >     > > total 2.0G
>     >     > > -rw-r--r-- 1 root root 1.1G Oct 31 16:50 e9def45d-1b9e-4c2e-b843-852fe40f00b2.vhd
>     >     > > -rw-r--r-- 1 root root  303 Oct 31 16:50 template.properties
>     >     > >
>     >     > > [root@acs01 203]# md5sum /exports/secondary/template/tmpl/2/203/e9def45d-1b9e-4c2e-b843-852fe40f00b2.vhd
>     >     > > 17eec8d4f6d4c128b34b9e2e1876ceb7  /exports/secondary/template/tmpl/2/203/e9def45d-1b9e-4c2e-b843-852fe40f00b2.vhd
>     >     > >
>     >     > > It's a huge problem!!! I think issue 9025 is not resolved
>     >     > >
>     >     > > Regards
>     >     > > Marcelo
>     >     > >
>     >     > > -----Original Message-----
>     >     > > From: Rafael Weingärtner [mailto:rafaelweingart...@gmail.com]
>     >     > > Sent: Tuesday, October 31, 2017 16:00
>     >     > > To: us...@cloudstack.apache.org
>     >     > > Subject: Re: Strange size of template from snapshot on
> XenServer
>     >     > >
>     >     > > Well, reading the description of the issue, it seems to be
>     >     > > related to the problem you are describing. However, I compared
>     >     > > the classes that were changed by PR
>     >     > > (https://github.com/apache/cloudstack/pull/1176)
>     >     > > and they are the same in ACS 4.9.2.0 and 4.9.3.0.
>     >     > >
>     >     > > On Tue, Oct 31, 2017 at 6:36 PM, Lotic Lists <li...@lotic.com.br> wrote:
>     >     > >
>     >     > > > Hi Rafael, thanks for testing.
>     >     > > >
>     >     > > > I found the issue
>     >     > > > https://issues.apache.org/jira/browse/CLOUDSTACK-9025
>     >     > > > I think it is the same case. I will bring up a new lab
>     >     > > > environment now to simulate the issue.
>     >     > > >
>     >     > > > Regards.
>     >     > > > Marcelo
>     >     > > >
>     >     > > > -----Original Message-----
>     >     > > > From: Rafael Weingärtner [mailto:rafaelweingartner@gmail.com]
>     >     > > > Sent: Monday, October 30, 2017 12:08
>     >     > > > To: us...@cloudstack.apache.org
>     >     > > > Subject: Re: Strange size of template from snapshot on
> XenServer
>     >     > > >
>     >     > > > I just did, and the size of the template is the size of the
>     >     > > > root disk + the 1GB file I created in the test VM.
>     >     > > >
>     >     > > > The system I tested this on is ACS 4.9.2, XenServer 6.5,
>     >     > > > primary storage iSCSI.
>     >     > > >
>     >     > > > On Mon, Oct 30, 2017 at 10:20 AM, Lotic Lists <li...@lotic.com.br> wrote:
>     >     > > >
>     >     > > > > Good morning
>     >     > > > >
>     >     > > > > Guys, who could execute the test below?
>     >     > > > >
>     >     > > > > 1. Create a manual snapshot of a volume
>     >     > > > > 2. Create a file (1GB) with dd on the VM
>     >     > > > > 3. Create a second snapshot of the volume
>     >     > > > > 4. Convert the latest snapshot to a template
>     >     > > > > 5. Verify the size of the VHD of the new template on
>     >     > > > > secondary storage.
>     >     > > > >
>     >     > > > > Here the template has 1GB, exactly the size of the latest
>     >     > > > > snapshot.
>     >     > > > >
>     >     > > > > ACS 4.9.3, XenServer 6.5, primary storage iSCSI
>     >     > > > >
>     >     > > > >
>     >     > > > > -----Original Message-----
>     >     > > > > From: Lotic Lists [mailto:li...@lotic.com.br]
>     >     > > > > Sent: Thursday, October 26, 2017 15:03
>     >     > > > > To: us...@cloudstack.apache.org
>     >     > > > > Subject: RE: Strange size of template from snapshot on
>     > XenServer
>     >     > > > >
>     >     > > > > I tested the same case in another environment, creating a
>     >     > > > > template from a snapshot, on ACS 4.9.2.0 and XenServer
>     >     > > > > 6.5/iSCSI.
>     >     > > > >
>     >     > > > > 1. Two snapshots were created
>     >     > > > >
>     >     > > > > # ls -lh /var/cloud_mount/e5921d9a-70b0-3501-9fda-6097e721fffb/snapshots/131/617/
>     >     > > > > total 3.0G
>     >     > > > > -rw-r--r-- 1 4294967294 root  13G Oct 26 13:46 42c7c51d-4aa7-40d2-b5eb-cb56fc963974.vhd
>     >     > > > > -rw-r--r-- 1 4294967294 root 1.2G Oct 26 14:42 53b33a5a-a7d9-4efa-b126-38eec5482b05.vhd
>     >     > > > >
>     >     > > > > https://i.imgur.com/cSHZAJ3.png
>     >     > > > >
>     >     > > > > 2. Create a template from the latest snapshot
>     >     > > > >
>     >     > > > > # ls -lh
>     >     > > > > total 37M
>     >     > > > > -rw-r--r-- 1 4294967294 root 1.2G Oct 26 14:43 637604a6-3085-444e-ba09-b9aab11a2c16.vhd
>     >     > > > > -rw-r--r-- 1 4294967294 root  303 Oct 26 14:43 template.properties
>     >     > > > >
>     >     > > > > # cat template.properties
>     >     > > > > #Thu Oct 26 16:43:54 UTC 2017
>     >     > > > > filename=637604a6-3085-444e-ba09-b9aab11a2c16.vhd
>     >     > > > > id=1
>     >     > > > > vhd=true
>     >     > > > > vhd.filename=637604a6-3085-444e-ba09-b9aab11a2c16.vhd
>     >     > > > > public=true
>     >     > > > > uniquename=637604a6-3085-444e-ba09-b9aab11a2c16
>     >     > > > > vhd.virtualsize=21474836480
>     >     > > > > virtualsize=21474836480
>     >     > > > > hvm=
>     >     > > > > vhd.size=1197752832
>     >     > > > > size=1197752832
>     >     > > > >
>     >     > > > > # md5sum /var/cloud_mount/e5921d9a-70b0-3501-9fda-6097e721fffb/snapshots/131/617/53b33a5a-a7d9-4efa-b126-38eec5482b05.vhd
>     >     > > > > 9d2078a6f41ca5e1eab824848c06df53  /var/cloud_mount/e5921d9a-70b0-3501-9fda-6097e721fffb/snapshots/131/617/53b33a5a-a7d9-4efa-b126-38eec5482b05.vhd
>     >     > > > >
>     >     > > > > # md5sum /var/cloud_mount/e5921d9a-70b0-3501-9fda-6097e721fffb/template/tmpl/2/294/637604a6-3085-444e-ba09-b9aab11a2c16.vhd
>     >     > > > > 9d2078a6f41ca5e1eab824848c06df53  /var/cloud_mount/e5921d9a-70b0-3501-9fda-6097e721fffb/template/tmpl/2/294/637604a6-3085-444e-ba09-b9aab11a2c16.vhd
>     >     > > > >
>     >     > > > > ACS does not merge the snapshot files into a single file
>     >     > > > > for the template; I think it's a bug.
>     >     > > > >
>     >     > > > >
>     >     > > > > -----Original Message-----
>     >     > > > > From: Rafael Weingärtner [mailto:rafaelweingartner@gmail.com]
>     >     > > > > Sent: Wednesday, October 25, 2017 22:51
>     >     > > > > To: us...@cloudstack.apache.org
>     >     > > > > Subject: Re: Strange size of template from snapshot on
>     > XenServer
>     >     > > > >
>     >     > > > > It is easy to test: get one of these files and try to use
>     >     > > > > it in VirtualBox. If it runs, it does not require anything
>     >     > > > > else.
>     >     > > > >
>     >     > > > > On Wed, Oct 25, 2017 at 10:49 PM, Lotic Lists <li...@lotic.com.br> wrote:
>     >     > > > >
>     >     > > > > > Sure, but the files on secondary storage don't have
>     >     > > > > > parents; I don't know if ACS can connect standalone files.
>     >     > > > > >
>     >     > > > > > vhd-util scan -f -m'*.vhd' -p
>     >     > > > > > vhd=14818597-55bb-49be-9ace-761e9e01c074.vhd capacity=311385128960 size=304124744192 hidden=0 parent=none
>     >     > > > > > vhd=bbd9e9b5-53fb-44a2-afbe-f262396a2d84.vhd capacity=311385128960 size=17770869248 hidden=0 parent=none
>     >     > > > > > vhd=3b90bbb2-7ce5-41f6-9f7e-fd0ea061cb2d.vhd capacity=311385128960 size=52199817728 hidden=0 parent=none
>     >     > > > > > vhd=4b450217-c9ad-45a2-946b-de3cb323469b.vhd capacity=311385128960 size=51615670784 hidden=0 parent=none
>     >     > > > > >
>     >     > > > > >
>     >     > > > > > -----Original Message-----
>     >     > > > > > From: Rafael Weingärtner [mailto:rafaelweingartner@gmail.com]
>     >     > > > > > Sent: Wednesday, October 25, 2017 22:35
>     >     > > > > > To: us...@cloudstack.apache.org
>     >     > > > > > Subject: Re: Strange size of template from snapshot on
>     >     > > > > > XenServer
>     >     > > > > >
>     >     > > > > > If it was not executing the coalesce, you would see the
>     >     > > > > > parent reference when you listed the hierarchy, right?
>     >     > > > > >
>     >     > > > > > On Wed, Oct 25, 2017 at 10:25 PM, Lotic Lists <li...@lotic.com.br> wrote:
>     >     > > > > >
>     >     > > > > > > I think ACS preserves all the VHDs for coalescing, but
>     >     > > > > > > the coalesce does not occur when creating a template or
>     >     > > > > > > before the next scheduled snapshot starts.
>     >     > > > > > > If I create a template from either of the (2) snapshots
>     >     > > > > > > showing in the GUI, the template has 49GB. 49GB is the
>     >     > > > > > > size of the 2 latest files on secondary storage.
>     >     > > > > > >
>     >     > > > > > > What do you think? Bug?
>     >     > > > > > >
>     >     > > > > > >
>     >     > > > > > > -----Original Message-----
>     >     > > > > > > From: Rafael Weingärtner [mailto:rafaelweingart...@gmail.com]
>     >     > > > > > > Sent: Wednesday, October 25, 2017 22:07
>     >     > > > > > > To: us...@cloudstack.apache.org
>     >     > > > > > > Subject: Re: Strange size of template from snapshot
> on
>     >     > > > > > > XenServer
>     >     > > > > > >
>     >     > > > > > > Aha, the two destroyed entries are showing "removed:
>     >     > > > > > > NULL".
>     >     > > > > > >
>     >     > > > > > > In ACS, the "destroying" of a resource is one thing;
>     >     > > > > > > the removal of this resource from the system is another.
>     >     > > > > > > I believe it has something to do with the expunge
>     >     > > > > > > interval, but I am not sure if the complete removal of
>     >     > > > > > > snapshots also happens within the expunge interval of
>     >     > > > > > > user VMs or if it is configured by something else.
>     >     > > > > > >
>     >     > > > > > >
>     >     > > > > > > On Wed, Oct 25, 2017 at 10:03 PM, Lotic Lists <li...@lotic.com.br> wrote:
>     >     > > > > > >
>     >     > > > > > > > I did not start a manual snapshot, just a scheduled
>     >     > > > > > > > one with keep=2.
>     >     > > > > > > >
>     >     > > > > > > > The GUI is showing 2 snapshots; the database is
>     >     > > > > > > > showing 4 entries, 2 Destroyed and 2 BackedUp.
>     >     > > > > > > >
>     >     > > > > > > > *************************** 132. row ***************************
>     >     > > > > > > >               id: 6380
>     >     > > > > > > >   data_center_id: 2
>     >     > > > > > > >       account_id: 64
>     >     > > > > > > >        domain_id: 49
>     >     > > > > > > >        volume_id: 2102
>     >     > > > > > > > disk_offering_id: 230
>     >     > > > > > > >           status: Destroyed
>     >     > > > > > > >             path: NULL
>     >     > > > > > > >             name: FullCRM_ROOT-542_20171022030205
>     >     > > > > > > >             uuid: 1294cdb7-01bf-4e76-b4c0-11b3c6b395e9
>     >     > > > > > > >    snapshot_type: 4
>     >     > > > > > > > type_description: DAILY
>     >     > > > > > > >             size: 311385128960
>     >     > > > > > > >          created: 2017-10-22 03:02:05
>     >     > > > > > > >          removed: NULL
>     >     > > > > > > >   backup_snap_id: NULL
>     >     > > > > > > >         swift_id: NULL
>     >     > > > > > > >       sechost_id: NULL
>     >     > > > > > > >     prev_snap_id: NULL
>     >     > > > > > > >  hypervisor_type: XenServer
>     >     > > > > > > >          version: 2.2
>     >     > > > > > > >            s3_id: NULL
>     >     > > > > > > >         min_iops: NULL
>     >     > > > > > > >         max_iops: NULL
>     >     > > > > > > > *************************** 133. row ***************************
>     >     > > > > > > >               id: 6416
>     >     > > > > > > >   data_center_id: 2
>     >     > > > > > > >       account_id: 64
>     >     > > > > > > >        domain_id: 49
>     >     > > > > > > >        volume_id: 2102
>     >     > > > > > > > disk_offering_id: 230
>     >     > > > > > > >           status: Destroyed
>     >     > > > > > > >             path: NULL
>     >     > > > > > > >             name: FullCRM_ROOT-542_20171023030205
>     >     > > > > > > >             uuid: 0a8067a3-ad15-40b6-8b16-c1881d7d9f41
>     >     > > > > > > >    snapshot_type: 4
>     >     > > > > > > > type_description: DAILY
>     >     > > > > > > >             size: 311385128960
>     >     > > > > > > >          created: 2017-10-23 03:02:05
>     >     > > > > > > >          removed: NULL
>     >     > > > > > > >   backup_snap_id: NULL
>     >     > > > > > > >         swift_id: NULL
>     >     > > > > > > >       sechost_id: NULL
>     >     > > > > > > >     prev_snap_id: NULL
>     >     > > > > > > >  hypervisor_type: XenServer
>     >     > > > > > > >          version: 2.2
>     >     > > > > > > >            s3_id: NULL
>     >     > > > > > > >         min_iops: NULL
>     >     > > > > > > >         max_iops: NULL
>     >     > > > > > > > *************************** 134. row ***************************
>     >     > > > > > > >               id: 6455
>     >     > > > > > > >   data_center_id: 2
>     >     > > > > > > >       account_id: 64
>     >     > > > > > > >        domain_id: 49
>     >     > > > > > > >        volume_id: 2102
>     >     > > > > > > > disk_offering_id: 230
>     >     > > > > > > >           status: BackedUp
>     >     > > > > > > >             path: NULL
>     >     > > > > > > >             name: FullCRM_ROOT-542_20171024030207
>     >     > > > > > > >             uuid: 1ca646e5-a155-4a59-aa31-5889eec7536f
>     >     > > > > > > >    snapshot_type: 4
>     >     > > > > > > > type_description: DAILY
>     >     > > > > > > >             size: 311385128960
>     >     > > > > > > >          created: 2017-10-24 03:02:07
>     >     > > > > > > >          removed: NULL
>     >     > > > > > > >   backup_snap_id: NULL
>     >     > > > > > > >         swift_id: NULL
>     >     > > > > > > >       sechost_id: NULL
>     >     > > > > > > >     prev_snap_id: NULL
>     >     > > > > > > >  hypervisor_type: XenServer
>     >     > > > > > > >          version: 2.2
>     >     > > > > > > >            s3_id: NULL
>     >     > > > > > > >         min_iops: NULL
>     >     > > > > > > >         max_iops: NULL
>     >     > > > > > > > *************************** 135. row ***************************
>     >     > > > > > > >               id: 6497
>     >     > > > > > > >   data_center_id: 2
>     >     > > > > > > >       account_id: 64
>     >     > > > > > > >        domain_id: 49
>     >     > > > > > > >        volume_id: 2102
>     >     > > > > > > > disk_offering_id: 230
>     >     > > > > > > >           status: BackedUp
>     >     > > > > > > >             path: NULL
>     >     > > > > > > >             name: FullCRM_ROOT-542_20171025030208
>     >     > > > > > > >             uuid: 759d61dc-e3db-4815-b68b-b90904ccbda5
>     >     > > > > > > >    snapshot_type: 4
>     >     > > > > > > > type_description: DAILY
>     >     > > > > > > >             size: 311385128960
>     >     > > > > > > >          created: 2017-10-25 03:02:08
>     >     > > > > > > >          removed: NULL
>     >     > > > > > > >   backup_snap_id: NULL
>     >     > > > > > > >         swift_id: NULL
>     >     > > > > > > >       sechost_id: NULL
>     >     > > > > > > >     prev_snap_id: NULL
>     >     > > > > > > >  hypervisor_type: XenServer
>     >     > > > > > > >          version: 2.2
>     >     > > > > > > >            s3_id: NULL
>     >     > > > > > > >         min_iops: NULL
>     >     > > > > > > >         max_iops: NULL
>     >     > > > > > > >
>     >     > > > > > > >
>     >     > > > > > > >
>     >     > > > > > > > -----Original Message-----
>     >     > > > > > > > From: Rafael Weingärtner [mailto:rafaelweingart...@gmail.com]
>     >     > > > > > > > Sent: Wednesday, October 25, 2017 21:50
>     >     > > > > > > > To: us...@cloudstack.apache.org
>     >     > > > > > > > Subject: Re: Strange size of template from
> snapshot on
>     >     > > > > > > > XenServer
>     >     > > > > > > >
>     >     > > > > > > > The file "14818597-55bb-49be-9ace-
> 761e9e01c074.vhd"
>     > seems
>     >     > > > > > > > more like a manual snapshot. It was created 2
> hours after
>     >     > > > > > > > the
>     >     > "normal"
>     >     > > > > > > > time. I would check these snapshots in ACS and see
> how
>     >     > > > > > > > they are
>     >     > > > > presented.
>     >     > > > > > > > BTW, how many snapshots is ACS showing for this
> volume? I
>     >     > > > > > > > would also check the "snapshots" table for the
> volume you
>     >     > > > > > > > have configured
>     >     > > > > them.
>     >     > > > > > > > select * from snapshots where volume_id = ?
>     >     > > > > > > >
>     >     > > > > > > > you may also filter for removed:
>     >     > > > > > > > select * from snapshots where volume_id = ? and removed is null
>     >     > > > > > > >
>     >     > > > > > > > On Wed, Oct 25, 2017 at 9:39 PM, Lotic Lists <li...@lotic.com.br> wrote:
>     >     > > > > > > >
>     >     > > > > > > > > Hi Rafael, I checked:
>     >     > > > > > > > >
>     >     > > > > > > > > # ls -lrt
>     >     > > > > > > > > total 285368382
>     >     > > > > > > > > -rw-r--r-- 1 root root 304124744192 Oct 22 03:28 14818597-55bb-49be-9ace-761e9e01c074.vhd
>     >     > > > > > > > > -rw-r--r-- 1 root root  17770869248 Oct 23 01:10 bbd9e9b5-53fb-44a2-afbe-f262396a2d84.vhd
>     >     > > > > > > > > -rw-r--r-- 1 root root  52199817728 Oct 24 01:30 3b90bbb2-7ce5-41f6-9f7e-fd0ea061cb2d.vhd
>     >     > > > > > > > > -rw-r--r-- 1 root root  51615670784 Oct 25 01:28 4b450217-c9ad-45a2-946b-de3cb323469b.vhd
>     >     > > > > > > > >
>     >     > > > > > > > > # vhd-util scan -f -m'*.vhd' -p
>     >     > > > > > > > > vhd=14818597-55bb-49be-9ace-761e9e01c074.vhd capacity=311385128960 size=304124744192 hidden=0 parent=none
>     >     > > > > > > > > vhd=bbd9e9b5-53fb-44a2-afbe-f262396a2d84.vhd capacity=311385128960 size=17770869248 hidden=0 parent=none
>     >     > > > > > > > > vhd=3b90bbb2-7ce5-41f6-9f7e-fd0ea061cb2d.vhd capacity=311385128960 size=52199817728 hidden=0 parent=none
>     >     > > > > > > > > vhd=4b450217-c9ad-45a2-946b-de3cb323469b.vhd capacity=311385128960 size=51615670784 hidden=0 parent=none
>     >     > > > > > > > >
>     >     > > > > > > > >
>     >     > > > > > > > >
>     >     > > > > > > > >
>     >     > > > > > > > > -----Original Message-----
>     >     > > > > > > > > From: Rafael Weingärtner [mailto:rafaelweingart...@gmail.com]
>     >     > > > > > > > > Sent: Wednesday, October 25, 2017 20:44
>     >     > > > > > > > > To: us...@cloudstack.apache.org
>     >     > > > > > > > > Subject: Re: Strange size of template from
> snapshot on
>     >     > > > > > > > > XenServer
>     >     > > > > > > > >
>     >     > > > > > > > > Did you check the hierarchy of those snapshot files
>     >     > > > > > > > > in your secondary storage?
>     >     > > > > > > > >
>     >     > > > > > > > >
>     >     > > > > > > > >
>     >     > > > > > > > > On Wed, Oct 25, 2017 at 7:44 PM, Lotic Lists <li...@lotic.com.br> wrote:
>     >     > > > > > > > >
>     >     > > > > > > > > > Hi all
>     >     > > > > > > > > >
>     >     > > > > > > > > >
>     >     > > > > > > > > >
>     >     > > > > > > > > > Has anyone identified problems with templates
>     >     > > > > > > > > > from snapshots on XenServer?
>     >     > > > > > > > > >
>     >     > > > > > > > > > I created a recurring snapshot; the first VHD on
>     >     > > > > > > > > > secondary storage has a size similar to the
>     >     > > > > > > > > > volume's. If I create a template from the latest
>     >     > > > > > > > > > snapshot, the size of the template's VHD is the
>     >     > > > > > > > > > same as the last (smaller) snapshot's; I think
>     >     > > > > > > > > > CloudStack/XenServer does not coalesce the VHD
>     >     > > > > > > > > > files on secondary storage to create the template.
>     >     > > > > > > > > >
>     >     > > > > > > > > >
>     >     > > > > > > > > >
>     >     > > > > > > > > > I keep two recurring snapshots; on secondary
>     >     > > > > > > > > > storage there are four files.
>     >     > > > > > > > > >
>     >     > > > > > > > > >
>     >     > > > > > > > > >
>     >     > > > > > > > > > The environment is ACS 4.9.3.0 and XenServer 6.5;
>     >     > > > > > > > > > primary storage is PreSetup with iSCSI.
>     >     > > > > > > > > >
>     >     > > > > > > > > >
>     >     > > > > > > > > >
>     >     > > > > > > > > > Thanks
>     >     > > > > > > > > >
>     >     > > > > > > > > > Marcelo
>     >     > > > > > > > > >
>     >     > > > > > > > > >
>     >     > > > > > > > > >
>     >     > > > > > > > > >
>     >     > > > > > > > >
>     >     > > > > > > > >
>     >     > > > > > > > > --
>     >     > > > > > > > > Rafael Weingärtner
>     >     > > > > > > > >
>     >     > > > > > > > >
>     >     > > > > > > >
>     >     > > > > > > >
>     >     > > > > > > > --
>     >     > > > > > > > Rafael Weingärtner
>     >     > > > > > > >
>     >     > > > > > > >
>     >     > > > > > >
>     >     > > > > > >
>     >     > > > > > > --
>     >     > > > > > > Rafael Weingärtner
>     >     > > > > > >
>     >     > > > > > >
>     >     > > > > >
>     >     > > > > >
>     >     > > > > > --
>     >     > > > > > Rafael Weingärtner
>     >     > > > > >
>     >     > > > > >
>     >     > > > >
>     >     > > > >
>     >     > > > > --
>     >     > > > > Rafael Weingärtner
>     >     > > > >
>     >     > > > >
>     >     > > > >
>     >     > > >
>     >     > > >
>     >     > > > --
>     >     > > > Rafael Weingärtner
>     >     > > >
>     >     > > >
>     >     > >
>     >     > >
>     >     > > --
>     >     > > Rafael Weingärtner
>     >     > >
>     >     > >
>     >     >
>     >     >
>     >     > --
>     >     > Rafael Weingärtner
>     >     >
>     >     >
>     >
>     >
>     >     --
>     >     Rafael Weingärtner
>     >
>     >
>     >
>
>
>
>     --
>     Rafael Weingärtner
>
>
>



-- 
Rafael Weingärtner
