Re: 4.11.0 -> 4.11.1 problem: Guest VMs losing connection after few minutes

2018-07-20 Thread Daan Hoogland
Jevgeni,
this should certainly have happened during the upgrade. If you still have
the logs from that day, you might find the error there. A spelling error in
a name or description might have happened, for instance. Anyway, congrats on
solving it.
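
A minimal sketch of that kind of log check, assuming the default management-server
log location on the management host; the grep patterns are only illustrative:

# look for template/registration errors around the upgrade date (hypothetical patterns)
grep -iE "systemvm|vm_template|template" /var/log/cloudstack/management/management-server.log* \
    | grep -iE "error|exception" | less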

On Fri, Jul 20, 2018 at 2:45 PM, Jevgeni Zolotarjov 
wrote:

> Yes,
>
> But isn't it what is written here
> http://docs.cloudstack.apache.org/projects/cloudstack-
> release-notes/en/4.11.1.0/upgrade/upgrade-4.11.html
> ?
>
> On Fri, Jul 20, 2018 at 3:29 PM Makrand  wrote:
>
> > I think you must have tried to register system VM template from template
> > menu from the left side manually. It will be registered as USER only in
> > that case.
> >
> > --
> > Makrand
> >
> >
> > On Fri, Jul 20, 2018 at 5:41 PM, Jevgeni Zolotarjov <
> > j.zolotar...@gmail.com>
> > wrote:
> >
> > > Eventually I fixed the problem, without clear understanding of the root
> > > cause.
> > >
> > > I destroyed routerVM, and cloudstack recreated it. But I discovered,
> that
> > > it is version 4.11.0. Not 4.11.1!
> > > I checked and systemVM template for 4.11.1 is registered in cloudstack
> -
> > > all OK.
> > > I noticed however, that its "Type" property is USER, not SYSTEM.
> > >
> > > Then I wanted to delete template for 4.11.0, but cloudstack does not
> > offer
> > > me this option. I can only add new ones.
> > >
> > > So, I ended with manipulations in the DB itself to make template for
> > 4.11.1
> > > the only one in the system and have Type = SYSTEM.
> > >
> > > After that I destroyed again routerVM. It was recreated and it is
> 4.11.1.
> > > And now everything works fine for over an hour already.
> > >
> > > I hope, thats it.
> > >
> > > On Fri, Jul 20, 2018 at 12:10 PM ilya musayev <
> > > ilya.mailing.li...@gmail.com>
> > > wrote:
> > >
> > > > Have you tried destroying router vm and let CloudStack create new
> one ?
> > > >
> > > > On Fri, Jul 20, 2018 at 1:33 AM Jevgeni Zolotarjov <
> > > j.zolotar...@gmail.com
> > > > >
> > > > wrote:
> > > >
> > > > > - an ip-address conflict.
> > > > >   JZ: unlikely, but not impossible. I tried to restart router VM in
> > > > > Network-Guest networks -> defaultGuestNetwork -> VirtualAppliances
> > > > > While rebooting ping to this router VM disappeared. Hence, no other
> > > > device
> > > > > is using the same IP.
> > > > > But!!! when this virtual router started, then network connection to
> > all
> > > > > guest VMs disappeared. So, it must be something with this virtual
> > > router.
> > > > >
> > > > > - flakey hardware being one of
> > > > > -+ if card in the host
> > > > > JZ: higly unlikely
> > > > >
> > > > > -+ a router with bad firmware
> > > > > JZ: also unlikely
> > > > >
> > > > > - of course a strange cofiguration of the software router in you
> host
> > > > might
> > > > > be the issue as well
> > > > > JZ: I didnt do any special configuration. Just used default.
> > > > >
> > > > > by all I know this happening after upgrade sounds like an unhappy
> > > > incident
> > > > > but can't be sure.
> > > > > The iptables restart, was this on the VirtualRouter or on the host,
> > or
> > > > > maybe on the guest? and the restart network?
> > > > >
> > > > > JZ: iptables restart on host machine. (or network restart on host)
> > > > >
> > > > >
> > > > >
> > > > > On Fri, Jul 20, 2018 at 11:14 AM Daan Hoogland <
> > > daan.hoogl...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > that behaviour sound familiar from a couple of cases:
> > > > > > - an ip-address conflict.
> > > > > > - flakey hardware being one of
> > > > > > -+ if card in the host
> > > > > > -+ a router with bad firmware
> > > > > > - of course a strange cofiguration of the software router in you
> > host
> > > > > might
> > > > > > be the issue as well
> > > > > >
> > > > > > by all I know this happening after upgrade sounds like an unhappy
> > > > > incident
> > > > > > but can't be sure.
> > > > > > The iptables restart, was this on the VirtualRouter or on the
> host,
> > > or
> > > > > > maybe on the guest? and the restart network?
> > > > > >
> > > > > >
> > > > > > On Fri, Jul 20, 2018 at 7:43 AM, Jevgeni Zolotarjov <
> > > > > > j.zolotar...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > I updated cloudstack 4.11.0 -> 4.11.1
> > > > > > >
> > > > > > > Everything went OK during update, but after host reboot guest
> VMs
> > > > lost
> > > > > > > connection after few minutes of normal work.
> > > > > > > I tried restarting network - systemctl restart network.service
> > > > > > > then connection was restored again for few minutes
> > > > > > >
> > > > > > > Finally I could restore connection by restarting iptables -
> > > systemctl
> > > > > > > restart iptables.service
> > > > > > >
> > > > > > > But then again guest VMs lost connection after few minutes of
> > > normal
> > > > > > > operation.
> > > > > > > The time of normal operation can be 5 minutes, but sometimes up
> > to
> > > 40
> > > > > > > minutes.
> > > > > > >
> > > > > > > Please help me to track the root cause and fix it

VPC static NAT with Private gateways

2018-07-20 Thread Adam Witwicki
Hello

If we add a static NAT to a server in a VPC, after a while it is unable to
route to the private gateways assigned in the same VPC.
Has anyone else seen this?

Thanks

Adam






Re: 4.11.0 -> 4.11.1 problem: Guest VMs losing connection after few minutes

2018-07-20 Thread Jevgeni Zolotarjov
Yes,

But isn't that what is written here?
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.11.1.0/upgrade/upgrade-4.11.html

On Fri, Jul 20, 2018 at 3:29 PM Makrand  wrote:

> I think you must have tried to register system VM template from template
> menu from the left side manually. It will be registered as USER only in
> that case.
>
> --
> Makrand
>
>
> On Fri, Jul 20, 2018 at 5:41 PM, Jevgeni Zolotarjov <
> j.zolotar...@gmail.com>
> wrote:
>
> > Eventually I fixed the problem, without clear understanding of the root
> > cause.
> >
> > I destroyed routerVM, and cloudstack recreated it. But I discovered, that
> > it is version 4.11.0. Not 4.11.1!
> > I checked and systemVM template for 4.11.1 is registered in cloudstack -
> > all OK.
> > I noticed however, that its "Type" property is USER, not SYSTEM.
> >
> > Then I wanted to delete template for 4.11.0, but cloudstack does not
> offer
> > me this option. I can only add new ones.
> >
> > So, I ended with manipulations in the DB itself to make template for
> 4.11.1
> > the only one in the system and have Type = SYSTEM.
> >
> > After that I destroyed again routerVM. It was recreated and it is 4.11.1.
> > And now everything works fine for over an hour already.
> >
> > I hope, thats it.
> >
> > On Fri, Jul 20, 2018 at 12:10 PM ilya musayev <
> > ilya.mailing.li...@gmail.com>
> > wrote:
> >
> > > Have you tried destroying router vm and let CloudStack create new one ?
> > >
> > > On Fri, Jul 20, 2018 at 1:33 AM Jevgeni Zolotarjov <
> > j.zolotar...@gmail.com
> > > >
> > > wrote:
> > >
> > > > - an ip-address conflict.
> > > >   JZ: unlikely, but not impossible. I tried to restart router VM in
> > > > Network-Guest networks -> defaultGuestNetwork -> VirtualAppliances
> > > > While rebooting ping to this router VM disappeared. Hence, no other
> > > device
> > > > is using the same IP.
> > > > But!!! when this virtual router started, then network connection to
> all
> > > > guest VMs disappeared. So, it must be something with this virtual
> > router.
> > > >
> > > > - flakey hardware being one of
> > > > -+ if card in the host
> > > > JZ: higly unlikely
> > > >
> > > > -+ a router with bad firmware
> > > > JZ: also unlikely
> > > >
> > > > - of course a strange cofiguration of the software router in you host
> > > might
> > > > be the issue as well
> > > > JZ: I didnt do any special configuration. Just used default.
> > > >
> > > > by all I know this happening after upgrade sounds like an unhappy
> > > incident
> > > > but can't be sure.
> > > > The iptables restart, was this on the VirtualRouter or on the host,
> or
> > > > maybe on the guest? and the restart network?
> > > >
> > > > JZ: iptables restart on host machine. (or network restart on host)
> > > >
> > > >
> > > >
> > > > On Fri, Jul 20, 2018 at 11:14 AM Daan Hoogland <
> > daan.hoogl...@gmail.com>
> > > > wrote:
> > > >
> > > > > that behaviour sound familiar from a couple of cases:
> > > > > - an ip-address conflict.
> > > > > - flakey hardware being one of
> > > > > -+ if card in the host
> > > > > -+ a router with bad firmware
> > > > > - of course a strange cofiguration of the software router in you
> host
> > > > might
> > > > > be the issue as well
> > > > >
> > > > > by all I know this happening after upgrade sounds like an unhappy
> > > > incident
> > > > > but can't be sure.
> > > > > The iptables restart, was this on the VirtualRouter or on the host,
> > or
> > > > > maybe on the guest? and the restart network?
> > > > >
> > > > >
> > > > > On Fri, Jul 20, 2018 at 7:43 AM, Jevgeni Zolotarjov <
> > > > > j.zolotar...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > I updated cloudstack 4.11.0 -> 4.11.1
> > > > > >
> > > > > > Everything went OK during update, but after host reboot guest VMs
> > > lost
> > > > > > connection after few minutes of normal work.
> > > > > > I tried restarting network - systemctl restart network.service
> > > > > > then connection was restored again for few minutes
> > > > > >
> > > > > > Finally I could restore connection by restarting iptables -
> > systemctl
> > > > > > restart iptables.service
> > > > > >
> > > > > > But then again guest VMs lost connection after few minutes of
> > normal
> > > > > > operation.
> > > > > > The time of normal operation can be 5 minutes, but sometimes up
> to
> > 40
> > > > > > minutes.
> > > > > >
> > > > > > Please help me to track the root cause and fix it
> > > > > >
> > > > > > Host OS - Centos 7.5
> > > > > > virtualisation - KVM
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Daan
> > > > >
> > > >
> > >
> >
>


Re: 4.11.0 -> 4.11.1 problem: Guest VMs losing connection after few minutes

2018-07-20 Thread Makrand
I think you must have registered the system VM template manually from the
Templates menu on the left side. In that case it gets registered as USER
only.

--
Makrand


On Fri, Jul 20, 2018 at 5:41 PM, Jevgeni Zolotarjov 
wrote:

> Eventually I fixed the problem, without clear understanding of the root
> cause.
>
> I destroyed routerVM, and cloudstack recreated it. But I discovered, that
> it is version 4.11.0. Not 4.11.1!
> I checked and systemVM template for 4.11.1 is registered in cloudstack -
> all OK.
> I noticed however, that its "Type" property is USER, not SYSTEM.
>
> Then I wanted to delete template for 4.11.0, but cloudstack does not offer
> me this option. I can only add new ones.
>
> So, I ended with manipulations in the DB itself to make template for 4.11.1
> the only one in the system and have Type = SYSTEM.
>
> After that I destroyed again routerVM. It was recreated and it is 4.11.1.
> And now everything works fine for over an hour already.
>
> I hope, thats it.
>
> On Fri, Jul 20, 2018 at 12:10 PM ilya musayev <
> ilya.mailing.li...@gmail.com>
> wrote:
>
> > Have you tried destroying router vm and let CloudStack create new one ?
> >
> > On Fri, Jul 20, 2018 at 1:33 AM Jevgeni Zolotarjov <
> j.zolotar...@gmail.com
> > >
> > wrote:
> >
> > > - an ip-address conflict.
> > >   JZ: unlikely, but not impossible. I tried to restart router VM in
> > > Network-Guest networks -> defaultGuestNetwork -> VirtualAppliances
> > > While rebooting ping to this router VM disappeared. Hence, no other
> > device
> > > is using the same IP.
> > > But!!! when this virtual router started, then network connection to all
> > > guest VMs disappeared. So, it must be something with this virtual
> router.
> > >
> > > - flakey hardware being one of
> > > -+ if card in the host
> > > JZ: higly unlikely
> > >
> > > -+ a router with bad firmware
> > > JZ: also unlikely
> > >
> > > - of course a strange cofiguration of the software router in you host
> > might
> > > be the issue as well
> > > JZ: I didnt do any special configuration. Just used default.
> > >
> > > by all I know this happening after upgrade sounds like an unhappy
> > incident
> > > but can't be sure.
> > > The iptables restart, was this on the VirtualRouter or on the host, or
> > > maybe on the guest? and the restart network?
> > >
> > > JZ: iptables restart on host machine. (or network restart on host)
> > >
> > >
> > >
> > > On Fri, Jul 20, 2018 at 11:14 AM Daan Hoogland <
> daan.hoogl...@gmail.com>
> > > wrote:
> > >
> > > > that behaviour sound familiar from a couple of cases:
> > > > - an ip-address conflict.
> > > > - flakey hardware being one of
> > > > -+ if card in the host
> > > > -+ a router with bad firmware
> > > > - of course a strange cofiguration of the software router in you host
> > > might
> > > > be the issue as well
> > > >
> > > > by all I know this happening after upgrade sounds like an unhappy
> > > incident
> > > > but can't be sure.
> > > > The iptables restart, was this on the VirtualRouter or on the host,
> or
> > > > maybe on the guest? and the restart network?
> > > >
> > > >
> > > > On Fri, Jul 20, 2018 at 7:43 AM, Jevgeni Zolotarjov <
> > > > j.zolotar...@gmail.com>
> > > > wrote:
> > > >
> > > > > I updated cloudstack 4.11.0 -> 4.11.1
> > > > >
> > > > > Everything went OK during update, but after host reboot guest VMs
> > lost
> > > > > connection after few minutes of normal work.
> > > > > I tried restarting network - systemctl restart network.service
> > > > > then connection was restored again for few minutes
> > > > >
> > > > > Finally I could restore connection by restarting iptables -
> systemctl
> > > > > restart iptables.service
> > > > >
> > > > > But then again guest VMs lost connection after few minutes of
> normal
> > > > > operation.
> > > > > The time of normal operation can be 5 minutes, but sometimes up to
> 40
> > > > > minutes.
> > > > >
> > > > > Please help me to track the root cause and fix it
> > > > >
> > > > > Host OS - Centos 7.5
> > > > > virtualisation - KVM
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Daan
> > > >
> > >
> >
>


Re: 4.11.0 -> 4.11.1 problem: Guest VMs losing connection after few minutes

2018-07-20 Thread Jevgeni Zolotarjov
Eventually I fixed the problem, without a clear understanding of the root
cause.

I destroyed the routerVM, and CloudStack recreated it. But I discovered that
it was version 4.11.0, not 4.11.1!
I checked, and the systemVM template for 4.11.1 is registered in CloudStack -
all OK.
I noticed, however, that its "Type" property is USER, not SYSTEM.

Then I wanted to delete the template for 4.11.0, but CloudStack does not offer
me this option. I can only add new ones.

So I ended up manipulating the DB itself to make the 4.11.1 template the only
one in the system and to set its Type = SYSTEM.

After that I destroyed the routerVM again. It was recreated and it is 4.11.1.
And now everything has been working fine for over an hour already.

I hope that's it.
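
For reference, a hedged sketch of the kind of DB inspection and change described
above, assuming the standard "cloud" database and its vm_template table; the
credentials and template id are placeholders, and the DB should be backed up first:

# see how the systemvm templates are registered (type should be SYSTEM for the router template)
mysql -u cloud -p cloud -e "SELECT id, name, type, removed FROM vm_template WHERE name LIKE 'systemvm%';"
# mark the 4.11.1 template as a SYSTEM template (the id below is a placeholder)
mysql -u cloud -p cloud -e "UPDATE vm_template SET type='SYSTEM' WHERE id=<4.11.1-template-id>;"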

On Fri, Jul 20, 2018 at 12:10 PM ilya musayev 
wrote:

> Have you tried destroying router vm and let CloudStack create new one ?
>
> On Fri, Jul 20, 2018 at 1:33 AM Jevgeni Zolotarjov  >
> wrote:
>
> > - an ip-address conflict.
> >   JZ: unlikely, but not impossible. I tried to restart router VM in
> > Network-Guest networks -> defaultGuestNetwork -> VirtualAppliances
> > While rebooting ping to this router VM disappeared. Hence, no other
> device
> > is using the same IP.
> > But!!! when this virtual router started, then network connection to all
> > guest VMs disappeared. So, it must be something with this virtual router.
> >
> > - flakey hardware being one of
> > -+ if card in the host
> > JZ: higly unlikely
> >
> > -+ a router with bad firmware
> > JZ: also unlikely
> >
> > - of course a strange cofiguration of the software router in you host
> might
> > be the issue as well
> > JZ: I didnt do any special configuration. Just used default.
> >
> > by all I know this happening after upgrade sounds like an unhappy
> incident
> > but can't be sure.
> > The iptables restart, was this on the VirtualRouter or on the host, or
> > maybe on the guest? and the restart network?
> >
> > JZ: iptables restart on host machine. (or network restart on host)
> >
> >
> >
> > On Fri, Jul 20, 2018 at 11:14 AM Daan Hoogland 
> > wrote:
> >
> > > that behaviour sound familiar from a couple of cases:
> > > - an ip-address conflict.
> > > - flakey hardware being one of
> > > -+ if card in the host
> > > -+ a router with bad firmware
> > > - of course a strange cofiguration of the software router in you host
> > might
> > > be the issue as well
> > >
> > > by all I know this happening after upgrade sounds like an unhappy
> > incident
> > > but can't be sure.
> > > The iptables restart, was this on the VirtualRouter or on the host, or
> > > maybe on the guest? and the restart network?
> > >
> > >
> > > On Fri, Jul 20, 2018 at 7:43 AM, Jevgeni Zolotarjov <
> > > j.zolotar...@gmail.com>
> > > wrote:
> > >
> > > > I updated cloudstack 4.11.0 -> 4.11.1
> > > >
> > > > Everything went OK during update, but after host reboot guest VMs
> lost
> > > > connection after few minutes of normal work.
> > > > I tried restarting network - systemctl restart network.service
> > > > then connection was restored again for few minutes
> > > >
> > > > Finally I could restore connection by restarting iptables - systemctl
> > > > restart iptables.service
> > > >
> > > > But then again guest VMs lost connection after few minutes of normal
> > > > operation.
> > > > The time of normal operation can be 5 minutes, but sometimes up to 40
> > > > minutes.
> > > >
> > > > Please help me to track the root cause and fix it
> > > >
> > > > Host OS - Centos 7.5
> > > > virtualisation - KVM
> > > >
> > >
> > >
> > >
> > > --
> > > Daan
> > >
> >
>


Re: Storage migration and XS VHD issue

2018-07-20 Thread Alessandro Caviglione
Thank you for your suggestions, but how can I merge the whole VHD chain if
the VM does not have any snapshots?
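
One hedged way to end up with a single file, assuming the XenServer xe CLI is
available: copying the VDI to another SR writes it out as one consolidated VHD.
The UUIDs are placeholders, and this is only a sketch, not the only approach:

xe vdi-copy uuid=<source-vdi-uuid> sr-uuid=<destination-sr-uuid>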

On Fri, Jul 20, 2018 at 10:20 AM Makrand  wrote:

> Hi Al,
>
> With XenServer, its always issue with moving bigger volumes. (operation
> just times out)
>
> Here are the parameters that worked for me for bigger volume migrations in
> past. (You may want to tweak as per your env. may increase a bit. )
>
> migratewait: 36000
> storage.pool.max.waitseconds: 36000
> wait:18000
>
> *wait *is most critical of all . You need to change this in global config
> and this warrants management service restart.
>
>
> For the VHD chain read -  https://support.citrix.com/article/CTX201296.
> Understand this.
>
> First, remove all the unnecessary snaps of disks etc. You need click on
> scan (from xencenter) after you remove snap etc.
>
> Good luck
>
>
> --
> Makrand
>
>
> On Fri, Jul 20, 2018 at 1:12 PM, Alessandro Caviglione <
> c.alessan...@gmail.com> wrote:
>
> > Hi guys,
> > we need to move all the VM's disk from one storage to another.
> > Due to extremely slow performance of the source primary storage, migrate
> > VMs and DATA disks from cloudstack does not work because of timeout.
> > So, the alternative, is to move VHD files directly between the storage
> and
> > change Cloudstack DB... but I see that each VM has a lng VHD chain
> and
> > CS DB point to the last one.
> > So... first question is: WHY?
> > Second question is: how can I consolidate all these files in a single
> one?
> >
> > vhd=88e504fa-08e8-4c82-abf6-1ebf8546ade2.vhd capacity=53687091200
> > size=18344002048 hidden=1 parent=none
> >vhd=ecc7b0f1-63f1-4aac-9a69-39feb0db0b6e.vhd capacity=53687091200
> > size=3526001152 hidden=1 parent=88e504fa-08e8-4c82-abf6-1ebf8546ade2.vhd
> >   vhd=73353525-ef56-49db-bfdf-b4e87f668d40.vhd capacity=53687091200
> > size=147194368 hidden=1 parent=ecc7b0f1-63f1-4aac-9a69-39feb0db0b6e.vhd
> >  vhd=3c9c9803-dbd9-4993-98eb-78f9d5a9dd5a.vhd
> capacity=53687091200
> > size=701923840 hidden=1 parent=73353525-ef56-49db-bfdf-b4e87f668d40.vhd
> > vhd=843874a3-6c2e-4e52-b3fd-f5cf6277d297.vhd
> > capacity=53687091200 size=73650688 hidden=1
> > parent=3c9c9803-dbd9-4993-98eb-78f9d5a9dd5a.vhd
> >vhd=1abe6e83-09a8-4ee0-a2b7-270b47c7a461.vhd
> > capacity=53687091200 size=1235640832 hidden=1
> > parent=843874a3-6c2e-4e52-b3fd-f5cf6277d297.vhd
> >   vhd=0cd07d92-d1d0-46f5-bc93-9f49bba72b74.vhd
> > capacity=53687091200 size=82055680 hidden=1
> > parent=1abe6e83-09a8-4ee0-a2b7-270b47c7a461.vhd
> >  vhd=f038eae3-6912-4d52-a39e-276052dee777.vhd
> > capacity=53687091200 size=914149888 hidden=1
> > parent=0cd07d92-d1d0-46f5-bc93-9f49bba72b74.vhd
> > vhd=684abd94-f483-4721-9b1c-b8f82d6b7bd7.vhd
> > capacity=53687091200 size=84156928 hidden=1
> > parent=f038eae3-6912-4d52-a39e-276052dee777.vhd
> >vhd=e9e152e4-87bf-430d-8ec6-7f7b0f45c27a.vhd
> > capacity=53687091200 size=733442560 hidden=1
> > parent=684abd94-f483-4721-9b1c-b8f82d6b7bd7.vhd
> >
>  vhd=9c01a612-99f7-4ba7-800f-c3c3b9d4b268.vhd
> > capacity=53687091200 size=241750528 hidden=1
> > parent=e9e152e4-87bf-430d-8ec6-7f7b0f45c27a.vhd
> >
> >  vhd=74fb956e-1699-481e-8cba-f2a898c5eebf.vhd capacity=53687091200
> > size=752353792 hidden=1 parent=9c01a612-99f7-4ba7-800f-c3c3b9d4b268.vhd
> >
> > vhd=bbb7c296-db11-4989-87ba-ed2e1a0d6dab.vhd capacity=53687091200
> > size=105169408 hidden=1 parent=74fb956e-1699-481e-8cba-f2a898c5eebf.vhd
> >
> >  vhd=d38c45ea-2c3d-42b8-b804-0010e8be2e52.vhd capacity=53687091200
> > size=865821184 hidden=1 parent=bbb7c296-db11-4989-87ba-ed2e1a0d6dab.vhd
> >
> > vhd=1c61f78e-b526-4383-9783-a2f54dad4c53.vhd capacity=53687091200
> > size=96764416 hidden=1 parent=d38c45ea-2c3d-42b8-b804-0010e8be2e52.vhd
> >
> >  vhd=21d75884-4976-400c-92b3-eed01dcbfde5.vhd capacity=53687091200
> > size=1204122112 hidden=1 parent=1c61f78e-b526-4383-9783-a2f54dad4c53.vhd
> >
> > vhd=d9581bd3-c93d-4938-9e59-740fbaff8fb2.vhd capacity=53687091200
> > size=96764416 hidden=1 parent=21d75884-4976-400c-92b3-eed01dcbfde5.vhd
> >
> >  vhd=ba3a708b-932c-4024-9051-9fe30694b03b.vhd capacity=53687091200
> > size=912048640 hidden=1 parent=d9581bd3-c93d-4938-9e59-740fbaff8fb2.vhd
> >
> > vhd=6c75cb97-6f64-4699-a0d4-93f906401e72.vhd capacity=53687091200
> > size=98865664 hidden=1 parent=ba3a708b-932c-4024-9051-9fe30694b03b.vhd
> >
> >  vhd=41960a95-78f6-4e4f-9ec3-2c0d34e0b33f.vhd capacity=53687091200
> > size=815391232 hidden=1 parent=6c75cb97-6f64-4699-a0d4-93f906401e72.vhd
> >
> > vhd=e32e9562-6873-490c-bc03-ecc98dfc95a1.vhd capacity=53687091200
> > size=79954432 hidden=1 parent=41960a95-78f6-4e4f-9ec3-2c0d34e0b33f.vhd
> >
> >  vhd=dd160ab4-ff9a-4da5-a453-a55426393b04.vhd capacity=53687091200
> > size=836403712 hidden=1 parent=e32e9562-6873-490c-bc03-ecc98dfc95a1.vhd
> >
> > vhd=15f9de9c-2da7-469d-9b86-1ccf79064b09.vhd capacity=53687091200
> > size=82055680 

Re: 4.11.0 -> 4.11.1 problem: Guest VMs losing connection after few minutes

2018-07-20 Thread Andrija Panic
Are you using VXLAN as the isolation method for the advanced network?
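
A hedged way to check this, assuming CloudMonkey is configured against the
management server (the same information is visible in the UI under the zone's
physical network):

cloudmonkey list physicalnetworks filter=name,isolationmethods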

On Fri, 20 Jul 2018 at 11:29, Daan Hoogland  wrote:

> On Fri, Jul 20, 2018 at 9:09 AM, ilya musayev <
> ilya.mailing.li...@gmail.com>
> wrote:
>
> > Have you tried destroying router vm and let CloudStack create new one ?
> >
> yes, or restart network with cleanup
>
>
>
>
> >
> > On Fri, Jul 20, 2018 at 1:33 AM Jevgeni Zolotarjov <
> j.zolotar...@gmail.com
> > >
> > wrote:
> >
> > > - an ip-address conflict.
> > >   JZ: unlikely, but not impossible. I tried to restart router VM in
> > > Network-Guest networks -> defaultGuestNetwork -> VirtualAppliances
> > > While rebooting ping to this router VM disappeared. Hence, no other
> > device
> > > is using the same IP.
> > > But!!! when this virtual router started, then network connection to all
> > > guest VMs disappeared. So, it must be something with this virtual
> router.
> > >
> > > - flakey hardware being one of
> > > -+ if card in the host
> > > JZ: higly unlikely
> > >
> > > -+ a router with bad firmware
> > > JZ: also unlikely
> > >
> > > - of course a strange cofiguration of the software router in you host
> > might
> > > be the issue as well
> > > JZ: I didnt do any special configuration. Just used default.
> > >
> > > by all I know this happening after upgrade sounds like an unhappy
> > incident
> > > but can't be sure.
> > > The iptables restart, was this on the VirtualRouter or on the host, or
> > > maybe on the guest? and the restart network?
> > >
> > > JZ: iptables restart on host machine. (or network restart on host)
> > >
> > >
> > >
> > > On Fri, Jul 20, 2018 at 11:14 AM Daan Hoogland <
> daan.hoogl...@gmail.com>
> > > wrote:
> > >
> > > > that behaviour sound familiar from a couple of cases:
> > > > - an ip-address conflict.
> > > > - flakey hardware being one of
> > > > -+ if card in the host
> > > > -+ a router with bad firmware
> > > > - of course a strange cofiguration of the software router in you host
> > > might
> > > > be the issue as well
> > > >
> > > > by all I know this happening after upgrade sounds like an unhappy
> > > incident
> > > > but can't be sure.
> > > > The iptables restart, was this on the VirtualRouter or on the host,
> or
> > > > maybe on the guest? and the restart network?
> > > >
> > > >
> > > > On Fri, Jul 20, 2018 at 7:43 AM, Jevgeni Zolotarjov <
> > > > j.zolotar...@gmail.com>
> > > > wrote:
> > > >
> > > > > I updated cloudstack 4.11.0 -> 4.11.1
> > > > >
> > > > > Everything went OK during update, but after host reboot guest VMs
> > lost
> > > > > connection after few minutes of normal work.
> > > > > I tried restarting network - systemctl restart network.service
> > > > > then connection was restored again for few minutes
> > > > >
> > > > > Finally I could restore connection by restarting iptables -
> systemctl
> > > > > restart iptables.service
> > > > >
> > > > > But then again guest VMs lost connection after few minutes of
> normal
> > > > > operation.
> > > > > The time of normal operation can be 5 minutes, but sometimes up to
> 40
> > > > > minutes.
> > > > >
> > > > > Please help me to track the root cause and fix it
> > > > >
> > > > > Host OS - Centos 7.5
> > > > > virtualisation - KVM
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Daan
> > > >
> > >
> >
>
>
>
> --
> Daan
>


-- 

Andrija Panić


Re: Github Issues

2018-07-20 Thread Dingane Hlaluku
+1. As a new developer in this community, I find GitHub much easier for creating and
tracking both issues and PRs.




From: Will Stevens 
Sent: Thursday, July 19, 2018 3:35:00 AM
To: d...@cloudstack.apache.org
Cc: users
Subject: Re: Github Issues

GitHub is the platform that most users and developers are most comfortable
collaborating on. Everyone knows it, regardless of their background, so it
opens our community up to a wider group of people. Those are my thoughts
anyway...

Will

On Wed, Jul 18, 2018, 2:10 PM Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> There is something else that might be worth mentioning.  Since we moved to
> Github, it seems that the project is attracting more people. I mean, it
> seems that there are new players coming and reporting issues and opening
> PRs.
>
> I might be totally mistaken though.
>
>
> On Wed, Jul 18, 2018 at 3:07 PM, Will Stevens 
> wrote:
>
> > +1 to access to better automation and integration.
> >
> > On Wed, Jul 18, 2018, 12:16 PM Rene Moser  wrote:
> >
> > > Hi
> > >
> > > On 07/17/2018 02:01 PM, Marc-Aurèle Brothier wrote:
> > > > Hi Paul,
> > > >
> > > > My 2 cents on the topic.
> > > >
> > > > people are commenting on issues when it should by the PR and
> vice-versa
> > > >>
> > > >
> > > > I think this is simply due to the fact that with one login you can do
> > > both,
> > > > versus before you had to have a JIRA login which people might have
> > tried
> > > to
> > > > avoid, preferring using github directly, ensuring the conversation
> will
> > > > only be on the PR. Most of the issues in Jira didn't have any
> > > conversation
> > > > at all.
> > > >
> > > > But I do feel also the pain of searching the issues on github as it's
> > > more
> > > > free-hand than a jira system. At the same time it's easier and
> quicker
> > to
> > > > navigate, so it ease the pain at the same time ;-)
> > > > I would say that the current labels isn't well organized to be able
> to
> > > > search like in jira but it could. For example any label has a prefix
> > > > describing the jira attribute type (component, version, ...) Then a
> bot
> > > > scanning the issue content could set some of them as other open
> source
> > > > project are doing. The bad thing here is that you might end up with
> too
> > > > many labels. Maybe @resmo can give his point of view on how things
> are
> > > > managed in Ansible (https://github.com/ansible/ansible/pulls - lots
> of
> > > > labels, lots of issues and PRs). I don't know if that's a solution
> but
> > > > labels seem the only way to organize things.
> > >
> > > Personally, I don't care much if jira or github issues. Github issues
> > > worked pretty well for me so far.
> > >
> > > However, We don't use all the things that make the work easier with
> > > github issues. I assume we invested much more efforts in making "jira"
> > > the way we wanted, now we assume that github just works?
> > >
> > > The benefit about github issues is, that it has an extensive api which
> > > let you automate. There are many helpful tools making our life easier.
> > >
> > > Let a bot do the issue labeling, workflowing, and user guiding and even
> > > merging PR after ci passed when 2 comments have LGTM.
> > >
> > > Look at https://github.com/kubernetes/kubernetes e.g.
> > >
> > > Short: If we want to automate things and evolve, github may be the
> > > better platform, if we want to keep things manual, then jira is
> probably
> > > more suitable.
> > >
> > > Regards
> > > René
> > >
> > >
> > >
> > >
> > >
> > >
> >
>
>
>
> --
> Rafael Weingärtner
>

dingane.hlal...@shapeblue.com
www.shapeblue.com
@shapeblue



Re: 4.11.0 -> 4.11.1 problem: Guest VMs losing connection after few minutes

2018-07-20 Thread Daan Hoogland
On Fri, Jul 20, 2018 at 9:09 AM, ilya musayev 
wrote:

> Have you tried destroying router vm and let CloudStack create new one ?
>
Yes, or restart the network with cleanup.


>
> On Fri, Jul 20, 2018 at 1:33 AM Jevgeni Zolotarjov  >
> wrote:
>
> > - an ip-address conflict.
> >   JZ: unlikely, but not impossible. I tried to restart router VM in
> > Network-Guest networks -> defaultGuestNetwork -> VirtualAppliances
> > While rebooting ping to this router VM disappeared. Hence, no other
> device
> > is using the same IP.
> > But!!! when this virtual router started, then network connection to all
> > guest VMs disappeared. So, it must be something with this virtual router.
> >
> > - flakey hardware being one of
> > -+ if card in the host
> > JZ: higly unlikely
> >
> > -+ a router with bad firmware
> > JZ: also unlikely
> >
> > - of course a strange cofiguration of the software router in you host
> might
> > be the issue as well
> > JZ: I didnt do any special configuration. Just used default.
> >
> > by all I know this happening after upgrade sounds like an unhappy
> incident
> > but can't be sure.
> > The iptables restart, was this on the VirtualRouter or on the host, or
> > maybe on the guest? and the restart network?
> >
> > JZ: iptables restart on host machine. (or network restart on host)
> >
> >
> >
> > On Fri, Jul 20, 2018 at 11:14 AM Daan Hoogland 
> > wrote:
> >
> > > that behaviour sound familiar from a couple of cases:
> > > - an ip-address conflict.
> > > - flakey hardware being one of
> > > -+ if card in the host
> > > -+ a router with bad firmware
> > > - of course a strange cofiguration of the software router in you host
> > might
> > > be the issue as well
> > >
> > > by all I know this happening after upgrade sounds like an unhappy
> > incident
> > > but can't be sure.
> > > The iptables restart, was this on the VirtualRouter or on the host, or
> > > maybe on the guest? and the restart network?
> > >
> > >
> > > On Fri, Jul 20, 2018 at 7:43 AM, Jevgeni Zolotarjov <
> > > j.zolotar...@gmail.com>
> > > wrote:
> > >
> > > > I updated cloudstack 4.11.0 -> 4.11.1
> > > >
> > > > Everything went OK during update, but after host reboot guest VMs
> lost
> > > > connection after few minutes of normal work.
> > > > I tried restarting network - systemctl restart network.service
> > > > then connection was restored again for few minutes
> > > >
> > > > Finally I could restore connection by restarting iptables - systemctl
> > > > restart iptables.service
> > > >
> > > > But then again guest VMs lost connection after few minutes of normal
> > > > operation.
> > > > The time of normal operation can be 5 minutes, but sometimes up to 40
> > > > minutes.
> > > >
> > > > Please help me to track the root cause and fix it
> > > >
> > > > Host OS - Centos 7.5
> > > > virtualisation - KVM
> > > >
> > >
> > >
> > >
> > > --
> > > Daan
> > >
> >
>



-- 
Daan


Re: 4.11.0 -> 4.11.1 problem: Guest VMs losing connection after few minutes

2018-07-20 Thread ilya musayev
Have you tried destroying the router VM and letting CloudStack create a new one?

On Fri, Jul 20, 2018 at 1:33 AM Jevgeni Zolotarjov 
wrote:

> - an ip-address conflict.
>   JZ: unlikely, but not impossible. I tried to restart router VM in
> Network-Guest networks -> defaultGuestNetwork -> VirtualAppliances
> While rebooting ping to this router VM disappeared. Hence, no other device
> is using the same IP.
> But!!! when this virtual router started, then network connection to all
> guest VMs disappeared. So, it must be something with this virtual router.
>
> - flakey hardware being one of
> -+ if card in the host
> JZ: higly unlikely
>
> -+ a router with bad firmware
> JZ: also unlikely
>
> - of course a strange cofiguration of the software router in you host might
> be the issue as well
> JZ: I didnt do any special configuration. Just used default.
>
> by all I know this happening after upgrade sounds like an unhappy incident
> but can't be sure.
> The iptables restart, was this on the VirtualRouter or on the host, or
> maybe on the guest? and the restart network?
>
> JZ: iptables restart on host machine. (or network restart on host)
>
>
>
> On Fri, Jul 20, 2018 at 11:14 AM Daan Hoogland 
> wrote:
>
> > that behaviour sound familiar from a couple of cases:
> > - an ip-address conflict.
> > - flakey hardware being one of
> > -+ if card in the host
> > -+ a router with bad firmware
> > - of course a strange cofiguration of the software router in you host
> might
> > be the issue as well
> >
> > by all I know this happening after upgrade sounds like an unhappy
> incident
> > but can't be sure.
> > The iptables restart, was this on the VirtualRouter or on the host, or
> > maybe on the guest? and the restart network?
> >
> >
> > On Fri, Jul 20, 2018 at 7:43 AM, Jevgeni Zolotarjov <
> > j.zolotar...@gmail.com>
> > wrote:
> >
> > > I updated cloudstack 4.11.0 -> 4.11.1
> > >
> > > Everything went OK during update, but after host reboot guest VMs lost
> > > connection after few minutes of normal work.
> > > I tried restarting network - systemctl restart network.service
> > > then connection was restored again for few minutes
> > >
> > > Finally I could restore connection by restarting iptables - systemctl
> > > restart iptables.service
> > >
> > > But then again guest VMs lost connection after few minutes of normal
> > > operation.
> > > The time of normal operation can be 5 minutes, but sometimes up to 40
> > > minutes.
> > >
> > > Please help me to track the root cause and fix it
> > >
> > > Host OS - Centos 7.5
> > > virtualisation - KVM
> > >
> >
> >
> >
> > --
> > Daan
> >
>


Re: 4.11.0 -> 4.11.1 problem: Guest VMs losing connection after few minutes

2018-07-20 Thread Jevgeni Zolotarjov
- an IP-address conflict.
  JZ: Unlikely, but not impossible. I tried to restart the router VM in
Network -> Guest networks -> defaultGuestNetwork -> Virtual Appliances.
While it was rebooting, ping to this router VM disappeared; hence, no other
device is using the same IP.
But!!! When this virtual router started, the network connection to all
guest VMs disappeared. So it must be something with this virtual router.

- flaky hardware, being one of
-+ an interface card in the host
JZ: highly unlikely

-+ a router with bad firmware
JZ: also unlikely

- of course, a strange configuration of the software router on your host
might be the issue as well
JZ: I didn't do any special configuration. I just used the defaults.

By all I know, this happening after the upgrade sounds like an unhappy
coincidence, but I can't be sure.
The iptables restart - was this on the VirtualRouter or on the host, or
maybe on the guest? And the network restart?

JZ: iptables restart on the host machine (or network restart on the host).
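
A hedged complement to the ping test above for ruling out an IP conflict, assuming
iputils arping on the CentOS host; the bridge name and router IP are placeholders:

# duplicate-address-detection mode: any reply means another device also claims the router's IP
arping -D -I cloudbr0 -c 3 <router-ip>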



On Fri, Jul 20, 2018 at 11:14 AM Daan Hoogland 
wrote:

> that behaviour sound familiar from a couple of cases:
> - an ip-address conflict.
> - flakey hardware being one of
> -+ if card in the host
> -+ a router with bad firmware
> - of course a strange cofiguration of the software router in you host might
> be the issue as well
>
> by all I know this happening after upgrade sounds like an unhappy incident
> but can't be sure.
> The iptables restart, was this on the VirtualRouter or on the host, or
> maybe on the guest? and the restart network?
>
>
> On Fri, Jul 20, 2018 at 7:43 AM, Jevgeni Zolotarjov <
> j.zolotar...@gmail.com>
> wrote:
>
> > I updated cloudstack 4.11.0 -> 4.11.1
> >
> > Everything went OK during update, but after host reboot guest VMs lost
> > connection after few minutes of normal work.
> > I tried restarting network - systemctl restart network.service
> > then connection was restored again for few minutes
> >
> > Finally I could restore connection by restarting iptables - systemctl
> > restart iptables.service
> >
> > But then again guest VMs lost connection after few minutes of normal
> > operation.
> > The time of normal operation can be 5 minutes, but sometimes up to 40
> > minutes.
> >
> > Please help me to track the root cause and fix it
> >
> > Host OS - Centos 7.5
> > virtualisation - KVM
> >
>
>
>
> --
> Daan
>


Re: Storage migration and XS VHD issue

2018-07-20 Thread Makrand
Hi Al,

With XenServer, it is always an issue moving bigger volumes (the operation
just times out).

Here are the parameters that worked for me for bigger volume migrations in
the past. (You may want to tweak them for your environment, perhaps increasing them a bit.)

migratewait: 36000
storage.pool.max.waitseconds: 36000
wait:18000

*wait* is the most critical of all. You need to change this in the global
configuration, and this warrants a management-service restart.
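
A sketch of setting these values via CloudMonkey, assuming it is installed and
pointed at the management server; the same settings can be changed in the UI under
Global Settings, and the management service still needs a restart afterwards:

cloudmonkey update configuration name=migratewait value=36000
cloudmonkey update configuration name=storage.pool.max.waitseconds value=36000
cloudmonkey update configuration name=wait value=18000
systemctl restart cloudstack-management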


For the VHD chain, read https://support.citrix.com/article/CTX201296 and
make sure you understand it.

First, remove all the unnecessary disk snapshots, etc. You need to click on
Scan (from XenCenter) after you remove the snapshots.

Good luck


--
Makrand


On Fri, Jul 20, 2018 at 1:12 PM, Alessandro Caviglione <
c.alessan...@gmail.com> wrote:

> Hi guys,
> we need to move all the VM's disk from one storage to another.
> Due to extremely slow performance of the source primary storage, migrate
> VMs and DATA disks from cloudstack does not work because of timeout.
> So, the alternative, is to move VHD files directly between the storage and
> change Cloudstack DB... but I see that each VM has a lng VHD chain and
> CS DB point to the last one.
> So... first question is: WHY?
> Second question is: how can I consolidate all these files in a single one?
>
> vhd=88e504fa-08e8-4c82-abf6-1ebf8546ade2.vhd capacity=53687091200
> size=18344002048 hidden=1 parent=none
>vhd=ecc7b0f1-63f1-4aac-9a69-39feb0db0b6e.vhd capacity=53687091200
> size=3526001152 hidden=1 parent=88e504fa-08e8-4c82-abf6-1ebf8546ade2.vhd
>   vhd=73353525-ef56-49db-bfdf-b4e87f668d40.vhd capacity=53687091200
> size=147194368 hidden=1 parent=ecc7b0f1-63f1-4aac-9a69-39feb0db0b6e.vhd
>  vhd=3c9c9803-dbd9-4993-98eb-78f9d5a9dd5a.vhd capacity=53687091200
> size=701923840 hidden=1 parent=73353525-ef56-49db-bfdf-b4e87f668d40.vhd
> vhd=843874a3-6c2e-4e52-b3fd-f5cf6277d297.vhd
> capacity=53687091200 size=73650688 hidden=1
> parent=3c9c9803-dbd9-4993-98eb-78f9d5a9dd5a.vhd
>vhd=1abe6e83-09a8-4ee0-a2b7-270b47c7a461.vhd
> capacity=53687091200 size=1235640832 hidden=1
> parent=843874a3-6c2e-4e52-b3fd-f5cf6277d297.vhd
>   vhd=0cd07d92-d1d0-46f5-bc93-9f49bba72b74.vhd
> capacity=53687091200 size=82055680 hidden=1
> parent=1abe6e83-09a8-4ee0-a2b7-270b47c7a461.vhd
>  vhd=f038eae3-6912-4d52-a39e-276052dee777.vhd
> capacity=53687091200 size=914149888 hidden=1
> parent=0cd07d92-d1d0-46f5-bc93-9f49bba72b74.vhd
> vhd=684abd94-f483-4721-9b1c-b8f82d6b7bd7.vhd
> capacity=53687091200 size=84156928 hidden=1
> parent=f038eae3-6912-4d52-a39e-276052dee777.vhd
>vhd=e9e152e4-87bf-430d-8ec6-7f7b0f45c27a.vhd
> capacity=53687091200 size=733442560 hidden=1
> parent=684abd94-f483-4721-9b1c-b8f82d6b7bd7.vhd
>   vhd=9c01a612-99f7-4ba7-800f-c3c3b9d4b268.vhd
> capacity=53687091200 size=241750528 hidden=1
> parent=e9e152e4-87bf-430d-8ec6-7f7b0f45c27a.vhd
>
>  vhd=74fb956e-1699-481e-8cba-f2a898c5eebf.vhd capacity=53687091200
> size=752353792 hidden=1 parent=9c01a612-99f7-4ba7-800f-c3c3b9d4b268.vhd
>
> vhd=bbb7c296-db11-4989-87ba-ed2e1a0d6dab.vhd capacity=53687091200
> size=105169408 hidden=1 parent=74fb956e-1699-481e-8cba-f2a898c5eebf.vhd
>
>  vhd=d38c45ea-2c3d-42b8-b804-0010e8be2e52.vhd capacity=53687091200
> size=865821184 hidden=1 parent=bbb7c296-db11-4989-87ba-ed2e1a0d6dab.vhd
>
> vhd=1c61f78e-b526-4383-9783-a2f54dad4c53.vhd capacity=53687091200
> size=96764416 hidden=1 parent=d38c45ea-2c3d-42b8-b804-0010e8be2e52.vhd
>
>  vhd=21d75884-4976-400c-92b3-eed01dcbfde5.vhd capacity=53687091200
> size=1204122112 hidden=1 parent=1c61f78e-b526-4383-9783-a2f54dad4c53.vhd
>
> vhd=d9581bd3-c93d-4938-9e59-740fbaff8fb2.vhd capacity=53687091200
> size=96764416 hidden=1 parent=21d75884-4976-400c-92b3-eed01dcbfde5.vhd
>
>  vhd=ba3a708b-932c-4024-9051-9fe30694b03b.vhd capacity=53687091200
> size=912048640 hidden=1 parent=d9581bd3-c93d-4938-9e59-740fbaff8fb2.vhd
>
> vhd=6c75cb97-6f64-4699-a0d4-93f906401e72.vhd capacity=53687091200
> size=98865664 hidden=1 parent=ba3a708b-932c-4024-9051-9fe30694b03b.vhd
>
>  vhd=41960a95-78f6-4e4f-9ec3-2c0d34e0b33f.vhd capacity=53687091200
> size=815391232 hidden=1 parent=6c75cb97-6f64-4699-a0d4-93f906401e72.vhd
>
> vhd=e32e9562-6873-490c-bc03-ecc98dfc95a1.vhd capacity=53687091200
> size=79954432 hidden=1 parent=41960a95-78f6-4e4f-9ec3-2c0d34e0b33f.vhd
>
>  vhd=dd160ab4-ff9a-4da5-a453-a55426393b04.vhd capacity=53687091200
> size=836403712 hidden=1 parent=e32e9562-6873-490c-bc03-ecc98dfc95a1.vhd
>
> vhd=15f9de9c-2da7-469d-9b86-1ccf79064b09.vhd capacity=53687091200
> size=82055680 hidden=1 parent=dd160ab4-ff9a-4da5-a453-a55426393b04.vhd
>
>  vhd=6e87a6c6-9e30-4202-b301-5846d6464241.vhd capacity=53687091200
> size=899441152 hidden=1 parent=15f9de9c-2da7-469d-9b86-1ccf79064b09.vhd
>
> vhd=0743f4d0-cf96-4a5d-97d7-3f5451482eb0.vhd capacity=53687091200
> size=90460672 hidden=1 parent=6e87a6c6-9e30-4202-b301-5846d6464241.vhd

Re: 4.11.0 -> 4.11.1 problem: Guest VMs losing connection after few minutes

2018-07-20 Thread Daan Hoogland
That behaviour sounds familiar from a couple of cases:
- an IP-address conflict
- flaky hardware, being one of
-+ an interface card in the host
-+ a router with bad firmware
- of course, a strange configuration of the software router on your host
might be the issue as well

By all I know, this happening after the upgrade sounds like an unhappy
coincidence, but I can't be sure.
The iptables restart - was this on the VirtualRouter or on the host, or
maybe on the guest? And the network restart?


On Fri, Jul 20, 2018 at 7:43 AM, Jevgeni Zolotarjov 
wrote:

> I updated cloudstack 4.11.0 -> 4.11.1
>
> Everything went OK during update, but after host reboot guest VMs lost
> connection after few minutes of normal work.
> I tried restarting network - systemctl restart network.service
> then connection was restored again for few minutes
>
> Finally I could restore connection by restarting iptables - systemctl
> restart iptables.service
>
> But then again guest VMs lost connection after few minutes of normal
> operation.
> The time of normal operation can be 5 minutes, but sometimes up to 40
> minutes.
>
> Please help me to track the root cause and fix it
>
> Host OS - Centos 7.5
> virtualisation - KVM
>



-- 
Daan


4.11.0 -> 4.11.1 problem: Guest VMs losing connection after few minutes

2018-07-20 Thread Jevgeni Zolotarjov
I updated CloudStack 4.11.0 -> 4.11.1.

Everything went OK during the update, but after a host reboot the guest VMs
lost connectivity after a few minutes of normal work.
I tried restarting the network - systemctl restart network.service -
and connectivity was restored again for a few minutes.

Finally I could restore connectivity by restarting iptables - systemctl
restart iptables.service.

But then the guest VMs again lost connectivity after a few minutes of normal
operation.
The period of normal operation can be 5 minutes, but is sometimes up to 40
minutes.

Please help me track down the root cause and fix it.

Host OS - CentOS 7.5
Virtualisation - KVM
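
The restart commands described above, plus a couple of hedged diagnostics for
narrowing down where traffic stops, run on the CentOS 7 KVM host; the bridge name
and guest IP are placeholders:

# work-arounds described above
systemctl restart network.service
systemctl restart iptables.service
# diagnostics: watch which chain's counters stop changing, and whether guest traffic
# still reaches the host bridge when connectivity drops
iptables -L -n -v --line-numbers
tcpdump -ni cloudbr0 host <guest-vm-ip>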


Storage migration and XS VHD issue

2018-07-20 Thread Alessandro Caviglione
Hi guys,
we need to move all the VMs' disks from one storage to another.
Due to the extremely slow performance of the source primary storage, migrating
VMs and DATA disks from CloudStack does not work because of timeouts.
So the alternative is to move the VHD files directly between the storages and
change the CloudStack DB... but I see that each VM has a long VHD chain and
the CS DB points to the last one.
So... the first question is: WHY?
The second question is: how can I consolidate all these files into a single one?

vhd=88e504fa-08e8-4c82-abf6-1ebf8546ade2.vhd capacity=53687091200
size=18344002048 hidden=1 parent=none
   vhd=ecc7b0f1-63f1-4aac-9a69-39feb0db0b6e.vhd capacity=53687091200
size=3526001152 hidden=1 parent=88e504fa-08e8-4c82-abf6-1ebf8546ade2.vhd
  vhd=73353525-ef56-49db-bfdf-b4e87f668d40.vhd capacity=53687091200
size=147194368 hidden=1 parent=ecc7b0f1-63f1-4aac-9a69-39feb0db0b6e.vhd
 vhd=3c9c9803-dbd9-4993-98eb-78f9d5a9dd5a.vhd capacity=53687091200
size=701923840 hidden=1 parent=73353525-ef56-49db-bfdf-b4e87f668d40.vhd
vhd=843874a3-6c2e-4e52-b3fd-f5cf6277d297.vhd
capacity=53687091200 size=73650688 hidden=1
parent=3c9c9803-dbd9-4993-98eb-78f9d5a9dd5a.vhd
   vhd=1abe6e83-09a8-4ee0-a2b7-270b47c7a461.vhd
capacity=53687091200 size=1235640832 hidden=1
parent=843874a3-6c2e-4e52-b3fd-f5cf6277d297.vhd
  vhd=0cd07d92-d1d0-46f5-bc93-9f49bba72b74.vhd
capacity=53687091200 size=82055680 hidden=1
parent=1abe6e83-09a8-4ee0-a2b7-270b47c7a461.vhd
 vhd=f038eae3-6912-4d52-a39e-276052dee777.vhd
capacity=53687091200 size=914149888 hidden=1
parent=0cd07d92-d1d0-46f5-bc93-9f49bba72b74.vhd
vhd=684abd94-f483-4721-9b1c-b8f82d6b7bd7.vhd
capacity=53687091200 size=84156928 hidden=1
parent=f038eae3-6912-4d52-a39e-276052dee777.vhd
   vhd=e9e152e4-87bf-430d-8ec6-7f7b0f45c27a.vhd
capacity=53687091200 size=733442560 hidden=1
parent=684abd94-f483-4721-9b1c-b8f82d6b7bd7.vhd
  vhd=9c01a612-99f7-4ba7-800f-c3c3b9d4b268.vhd
capacity=53687091200 size=241750528 hidden=1
parent=e9e152e4-87bf-430d-8ec6-7f7b0f45c27a.vhd

 vhd=74fb956e-1699-481e-8cba-f2a898c5eebf.vhd capacity=53687091200
size=752353792 hidden=1 parent=9c01a612-99f7-4ba7-800f-c3c3b9d4b268.vhd

vhd=bbb7c296-db11-4989-87ba-ed2e1a0d6dab.vhd capacity=53687091200
size=105169408 hidden=1 parent=74fb956e-1699-481e-8cba-f2a898c5eebf.vhd

 vhd=d38c45ea-2c3d-42b8-b804-0010e8be2e52.vhd capacity=53687091200
size=865821184 hidden=1 parent=bbb7c296-db11-4989-87ba-ed2e1a0d6dab.vhd

vhd=1c61f78e-b526-4383-9783-a2f54dad4c53.vhd capacity=53687091200
size=96764416 hidden=1 parent=d38c45ea-2c3d-42b8-b804-0010e8be2e52.vhd

 vhd=21d75884-4976-400c-92b3-eed01dcbfde5.vhd capacity=53687091200
size=1204122112 hidden=1 parent=1c61f78e-b526-4383-9783-a2f54dad4c53.vhd

vhd=d9581bd3-c93d-4938-9e59-740fbaff8fb2.vhd capacity=53687091200
size=96764416 hidden=1 parent=21d75884-4976-400c-92b3-eed01dcbfde5.vhd

 vhd=ba3a708b-932c-4024-9051-9fe30694b03b.vhd capacity=53687091200
size=912048640 hidden=1 parent=d9581bd3-c93d-4938-9e59-740fbaff8fb2.vhd

vhd=6c75cb97-6f64-4699-a0d4-93f906401e72.vhd capacity=53687091200
size=98865664 hidden=1 parent=ba3a708b-932c-4024-9051-9fe30694b03b.vhd

 vhd=41960a95-78f6-4e4f-9ec3-2c0d34e0b33f.vhd capacity=53687091200
size=815391232 hidden=1 parent=6c75cb97-6f64-4699-a0d4-93f906401e72.vhd

vhd=e32e9562-6873-490c-bc03-ecc98dfc95a1.vhd capacity=53687091200
size=79954432 hidden=1 parent=41960a95-78f6-4e4f-9ec3-2c0d34e0b33f.vhd

 vhd=dd160ab4-ff9a-4da5-a453-a55426393b04.vhd capacity=53687091200
size=836403712 hidden=1 parent=e32e9562-6873-490c-bc03-ecc98dfc95a1.vhd

vhd=15f9de9c-2da7-469d-9b86-1ccf79064b09.vhd capacity=53687091200
size=82055680 hidden=1 parent=dd160ab4-ff9a-4da5-a453-a55426393b04.vhd

 vhd=6e87a6c6-9e30-4202-b301-5846d6464241.vhd capacity=53687091200
size=899441152 hidden=1 parent=15f9de9c-2da7-469d-9b86-1ccf79064b09.vhd

vhd=0743f4d0-cf96-4a5d-97d7-3f5451482eb0.vhd capacity=53687091200
size=90460672 hidden=1 parent=6e87a6c6-9e30-4202-b301-5846d6464241.vhd

 vhd=09ad513b-2d1d-4878-bd30-12064a989b02.vhd capacity=53687091200
size=823796224 hidden=1 parent=0743f4d0-cf96-4a5d-97d7-3f5451482eb0.vhd

  vhd=787efd1f-cad7-444c-a30d-8edcfcd2dbc6.vhd capacity=53687091200
size=105169408 hidden=1 parent=09ad513b-2d1d-4878-bd30-12064a989b02.vhd

 vhd=1e9c643f-551a-425b-abba-e10d03ed7ca8.vhd capacity=53687091200
size=867922432 hidden=1 parent=787efd1f-cad7-444c-a30d-8edcfcd2dbc6.vhd

vhd=ffbdd502-4517-484a-b1d3-891e451145b0.vhd capacity=53687091200
size=979288576 hidden=1 parent=1e9c643f-551a-425b-abba-e10d03ed7ca8.vhd

   vhd=5d5a3373-886b-427c-98bf-01f700cbd861.vhd
capacity=53687091200 size=1868116480 hidden=0
parent=ffbdd502-4517-484a-b1d3-891e451145b0.vhd