Re: KVM Host overprovisioning

2017-09-12 Thread Ivan Kudryavtsev
Hi, community.

I've implemented a quick PR for the KVM cloudstack-agent that adds a new
directive, host.overcommit.mem.mb, to agent.properties:

https://github.com/apache/cloudstack/pull/2266
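
For reference, a minimal sketch of what the new entry could look like in
agent.properties; the key name is the one from the PR above, while the value
and the exact semantics (extra megabytes added to the reported host RAM) are
my reading of the description, so check the PR itself:

    # hypothetical value: report an extra 16 GB on top of the physical RAM
    host.overcommit.mem.mb=16384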

I tested it on my 4.9 and it works nicely for me. Also, during the build I
found an interesting bug in the Quota plugin: practically, the build fails
before 11:00 AM in my GMT+6 TZ, because

https://github.com/apache/cloudstack/blob/master/plugins/database/quota/src/org/apache/cloudstack/api/response/QuotaResponseBuilderImpl.java#L513

generates an incorrect date for the next day, which makes this test fail:

https://github.com/apache/cloudstack/blob/master/plugins/database/quota/test/org/apache/cloudstack/api/response/QuotaResponseBuilderImplTest.java#L221

Fortunately, while I was investigating, 11 AM arrived and the code built
without problems. Since I don't use the Quota functions I stopped digging
further, but the PR includes a small refactoring that just removes some
copy-paste.
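
For what it's worth, this looks like the usual class of timezone-dependent
date arithmetic: a "next day" computed in the JVM's default timezone disagrees
with one computed in another zone for part of each day. A minimal sketch of
that mechanism (an assumption on my side, not code from the Quota plugin; the
real offset evidently differs, since the boundary I see is 11 AM):

    import java.util.Calendar;
    import java.util.TimeZone;

    public class NextDayDemo {
        public static void main(String[] args) {
            // "Tomorrow" in the JVM default timezone (e.g. GMT+6)...
            Calendar local = Calendar.getInstance();
            local.add(Calendar.DATE, 1);

            // ...versus "tomorrow" in UTC.
            Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
            utc.add(Calendar.DATE, 1);

            // Between local midnight and the moment UTC catches up, the two
            // calendars sit on different dates, so a test comparing them
            // passes or fails depending on the wall clock.
            System.out.println(local.get(Calendar.DAY_OF_MONTH)
                    == utc.get(Calendar.DAY_OF_MONTH));
        }
    }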


2017-09-12 14:09 GMT+07:00 Wido den Hollander :

>
> > On 12 September 2017 at 9:05, Ivan Kudryavtsev <kudryavtsev...@bw-sw.com> wrote:
> >
> >
> > Yes, sure.
> >
> > What I want is an ability to increase host memory rather than decrease it.
> >
> > So the first suggestion is to add a parameter that increases the amount of
> > megabytes or gigabytes, not necessarily a multiplier. Adding it manually via
> > agent.properties is a good way to implement it because different hosts can
> > have different capabilities (depending on CPU model), and manual "per-host"
> > configuration is better than just a cluster configuration option.
> >
>
> I would use a multiplication factor, but you can implement both. A PR for
> this would be welcome!
>
> Wido
>
> > 2017-09-12 13:46 GMT+07:00 Wido den Hollander :
> >
> > >
> > > > On 11 September 2017 at 13:04, Ivan Kudryavtsev <kudryavtsev...@bw-sw.com> wrote:
> > > >
> > > >
> > > > Hi, Wido.
> > > >
> > > > Yes, you can. But it doesn't work the way I expect, because it cuts RAM
> > > > from the VM by dividing it by the overprovisioning factor: a VM with 2GB
> > > > of RAM and an overprovisioning factor of 2.0 will get 1GB displayed by
> > > > the "free" command. That's why I finished the message by saying that
> > > > maybe I just don't get the idea. The behaviour is the same in my prod
> > > > 4.3 and new 4.9.
> > > >
> > >
> > > Hmm, ok.
> > >
> > > So for the KVM Agent you can add "host reserved mb" to the
> > > agent.properties, but you are proposing a setting where you can multiply
> > > the memory?
> > >
> > > E.g. by 1.5 if you want to, and have the Agent expose that to the MGMT
> > > server?
> > >
> > > Wido
> > >
> > > > 2017-09-11 18:00 GMT+07:00 Wido den Hollander :
> > > >
> > > > > Hi,
> > > > >
> > > > > > On 10 September 2017 at 8:37, Ivan Kudryavtsev <kudryavtsev...@bw-sw.com> wrote:
> > > > > >
> > > > > >
> > > > > > Hello, community.
> > > > > >
> > > > > > In recent years the Linux kernel has gained some interesting
> > > > > > features like KSM, ZSWAP and ZRAM. Hardware also steps forward: we
> > > > > > now see Intel 3D XPoint and extremely fast SSD drives with M.2 and
> > > > > > PCI-E interfaces.
> > > > > >
> > > > > > These facilities enable potentially interesting uses of
> > > > > > overcommitted RAM for hosts. According to IBM's investigations,
> > > > > > Zswap with LZ4/ZBUD increases virtual RAM by 40%.
> > > > > >
> > > > > > I investigated the current Apache CloudStack memory overcommitment
> > > > > > capabilities; they mostly affect the VM's RAM by utilizing
> > > > > > ballooning, and I don't think that is what is needed to open up
> > > > > > these new facilities. There are many cases which could utilize
> > > > > > ZSWAP and fast swap devices to efficiently provision more RAM than
> > > > > > is physically present.
> > > > > >
> > > > > > I suppose the CloudStack Agent for KVM could have a configuration
> > > > > > parameter which "mangles" the RAM it reports. On the other hand,
> > > > > > it could be done by implementing host properties on the server
> > > > > > side. I tried manually increasing the value in the host table:
> > > > > >
> > > > > > update host set ram=ram * 1.4 where id=1;
> > > > > >
> > > > > > and it seems that until the next host stats update it works as
> > > > > > expected. I think this workaround is useful, but it would be
> > > > > > better to have the function in core as a standard feature.
> > > > > >
> > > > >
> > > > > Can't you set memory overprovisioning on a cluster basis in the
> > > > > GUI? I thought you could.
> > > > >
> > > > > Wido
> > > > >
> > > > > > Let me know what you think about it; it might be that I don't
> > > > > > understand something and ACS already has this in place? I would
> > > > > > also like to hear your thoughts on ZSWAP usage in practice.
> > > > > >
> > > > > > --
> > > > > > With best regards, Ivan Kudryavtsev
> > > > > > Bitworks Software, Ltd.
> > > > > > Cell: +7-923-414-1515
> > > > > > WWW: http://bitworks.software/ 
> > > > >

Re: Question concerning Virtual Routers and problems during failover

2017-09-12 Thread Nitin Kumar Maharana
Hi Tim,

Can you please attach both VRs' cloud.log (present on the VR at
/var/log/cloud.log) as well as the management server log for the failure
case? That will help us find out the exact cause of the failure.


Thanks,
Nitin
On 13-Sep-2017, at 12:42 AM, Tim Gipson <tgip...@ena.com.invalid> wrote:

Hey all,

I’ve found what I think is a possible issue with the redundant VPC router
pairs in CloudStack.  The issue was first noticed when routers were failing
over from master to backup.  When the backup router became master, everything
continued to work properly and traffic flowed as normal.  However, when it
failed back from the new master to the original master, the virtual router
stopped allowing traffic through any network interfaces, and any failover
after that resulted in virtual routers that were not passing traffic.

I can reproduce this behavior by doing a manual failover (logging in and 
issuing a reboot command on the router) from master to backup and then back to 
the original master.  From what I can tell, the iptables rules on the router 
are somehow modified during the failover (or a manual reboot) in such a way as 
to make them completely nonfunctional.  I did a side-by-side comparison of the 
iptables rules before and after a failover (or a manual reboot) and there are 
definite differences.  Sometimes rules are changed, sometimes they are 
duplicated, and I’ve even found that some rules are missing completely out of 
iptables.

We are running in a CentOS 7 environment and using KVM as our hypervisor.  Our 
CS version is 4.8 with standard images for the VRs.  As mentioned previously, 
our VRs are in redundant pairs for VPCs.

I’ve attached two iptables outputs, one from a working router and one from a 
broken router after failover.

Any help or direction you could provide to help me further identify why this is 
happening would be appreciated.

Thanks!

Tim Gipson








Question concerning Virtual Routers and problems during failover

2017-09-12 Thread Tim Gipson
Hey all,

I’ve found what I think is a possible issue with the redundant VPC router
pairs in CloudStack.  The issue was first noticed when routers were failing
over from master to backup.  When the backup router became master, everything
continued to work properly and traffic flowed as normal.  However, when it
failed back from the new master to the original master, the virtual router
stopped allowing traffic through any network interfaces, and any failover
after that resulted in virtual routers that were not passing traffic.

I can reproduce this behavior by doing a manual failover (logging in and 
issuing a reboot command on the router) from master to backup and then back to 
the original master.  From what I can tell, the iptables rules on the router 
are somehow modified during the failover (or a manual reboot) in such a way as 
to make them completely nonfunctional.  I did a side-by-side comparison of the 
iptables rules before and after a failover (or a manual reboot) and there are 
definite differences.  Sometimes rules are changed, sometimes they are 
duplicated, and I’ve even found that some rules are missing completely out of 
iptables.
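
A straightforward way to capture that comparison (a sketch of the general
approach, not necessarily how the attached outputs were produced) is to diff
iptables-save dumps taken before and after the failover:

    iptables-save > /root/iptables-before.rules
    # trigger the failover (e.g. reboot the master), then on the new master:
    iptables-save > /root/iptables-after.rules
    # note: the [packets:bytes] counters on the chain policy lines always
    # differ, so only the rule lines themselves are meaningful in the diff
    diff -u /root/iptables-before.rules /root/iptables-after.rules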

We are running in a CentOS 7 environment and using KVM as our hypervisor.  Our 
CS version is 4.8 with standard images for the VRs.  As mentioned previously, 
our VRs are in redundant pairs for VPCs.

I’ve attached two iptables outputs, one from a working router and one from a 
broken router after failover.

Any help or direction you could provide to help me further identify why this is 
happening would be appreciated.

Thanks!

Tim Gipson


 

# Generated by iptables-save v1.4.14 on Tue Aug 29 21:08:17 2017
*mangle
:PREROUTING ACCEPT [445:57066]
:INPUT ACCEPT [547:62882]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [537:50055]
:POSTROUTING ACCEPT [537:50055]
:ACL_OUTBOUND_eth2 - [0:0]
:VPN_STATS_eth1 - [0:0]
-A PREROUTING -m state --state RELATED,ESTABLISHED -j CONNMARK --restore-mark 
--nfmask 0x --ctmask 0x
-A PREROUTING -m state --state RELATED,ESTABLISHED -j CONNMARK --restore-mark 
--nfmask 0x --ctmask 0x
-A PREROUTING -i eth2 -m state --state NEW -j CONNMARK --set-xmark 
0x2/0x
-A PREROUTING -s 172.16.64.0/24 ! -d 172.16.64.1/32 -i eth2 -m state --state 
NEW -j ACL_OUTBOUND_eth2
-A PREROUTING -i eth1 -m state --state NEW -j CONNMARK --set-xmark 
0x1/0x
-A FORWARD -j VPN_STATS_eth1
-A ACL_OUTBOUND_eth2 -d 224.0.0.18/32 -j ACCEPT
-A ACL_OUTBOUND_eth2 -j ACCEPT
-A ACL_OUTBOUND_eth2 -d 225.0.0.50/32 -j ACCEPT
-A VPN_STATS_eth1 -o eth1 -m mark --mark 0x525
-A VPN_STATS_eth1 -i eth1 -m mark --mark 0x524
COMMIT
# Completed on Tue Aug 29 21:08:17 2017
# Generated by iptables-save v1.4.14 on Tue Aug 29 21:08:17 2017
*filter
:INPUT DROP [36:4240]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [537:50055]
:ACL_INBOUND_eth2 - [0:0]
:NETWORK_STATS - [0:0]
:NETWORK_STATS_eth1 - [0:0]
-A INPUT -i eth0 -p tcp -m tcp --dport 10086 -j ACCEPT
-A INPUT -j NETWORK_STATS
-A INPUT -d 172.16.64.3/32 -i eth2 -p tcp -m tcp --dport 80 -m state --state 
NEW -j ACCEPT
-A INPUT -d 172.16.64.3/32 -i eth2 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -d 172.16.64.3/32 -i eth2 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i eth2 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -j NETWORK_STATS
-A INPUT -i eth2 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i eth2 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i eth2 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i eth2 -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
-A INPUT -i eth2 -p tcp -m tcp --dport 8080 -m state --state NEW -j ACCEPT
-A INPUT -d 224.0.0.18/32 -j ACCEPT
-A INPUT -d 225.0.0.50/32 -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 3922 -m state --state NEW,ESTABLISHED -j 
ACCEPT
-A INPUT -d 224.0.0.18/32 -j ACCEPT
-A INPUT -d 225.0.0.50/32 -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 3922 -m state --state NEW,ESTABLISHED -j 
ACCEPT
-A FORWARD -j NETWORK_STATS
-A FORWARD -j NETWORK_STATS_eth1
-A FORWARD -j NETWORK_STATS
-A FORWARD -d 172.16.64.0/24 -o eth2 -j ACL_INBOUND_eth2
-A FORWARD -s 172.16.64.0/22 ! -d 172.16.64.0/22 -j ACCEPT
-A OUTPUT -j NETWORK_STATS
-A OUTPUT -j NETWORK_STATS
-A ACL_INBOUND_eth2 -d 225.0.0.50/32 -j ACCEPT
-A ACL_INBOUND_eth2 -d 224.0.0.18/32 -j ACCEPT
-A NETWORK_STATS -i eth0 -o eth2 -p tcp
-A NETWORK_STATS -i eth2 -o eth0 -p tcp
-A NETWORK_STATS ! -i eth0 -o eth2 -p tcp
-A NETWORK_STATS -i eth2 ! -o eth0 -p tcp
-A NETWORK_STATS -i eth0 -o eth2 -p tcp
-A NETWORK_STATS -i eth2 -o eth0 -p tcp
-A NETWORK_STATS ! -i eth0 -o eth2 -p tcp
-A NETWORK_STATS -i eth2 ! -o eth0 -p tcp
-A NETWORK_STATS_eth1 -d 172.16.64.0/24 -o eth1
-A NETWORK_STATS_eth1 -s 172.16.64.0/24 -o eth1
COMMIT
# Completed on Tue Aug 29 21:08:17 2017
# Generated by iptables-save v1.4.14 on Tue Aug 29 21:08:17 2017
*nat
:PREROUTING ACCEPT [70:3660]
:INPUT ACCEPT [16:1104]
:OUTPUT ACCEPT [10:641]
:POSTROUTING ACCEPT [0:0]

Release packages for 4.9.3.0

2017-09-12 Thread Rohit Yadav
Wido/PL/others,


Can you please help with building and publishing the 4.9.3.0 rpm/deb packages
on the download.cloudstack.org repository? I've now built and published the
repos on packages.shapeblue.com (see shapeblue.com/packages for details).


Regards.


rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue



Re: KVM Host overprovisioning

2017-09-12 Thread Wido den Hollander

> On 12 September 2017 at 9:05, Ivan Kudryavtsev wrote:
> 
> 
> Yes, sure.
> 
> What I want is an ability to increase host memory rather than decrease it.
> 
> So the first suggestion is to add a parameter that increases the amount of
> megabytes or gigabytes, not necessarily a multiplier. Adding it manually via
> agent.properties is a good way to implement it because different hosts can
> have different capabilities (depending on CPU model), and manual "per-host"
> configuration is better than just a cluster configuration option.
> 

I would use a multiplication factor, but you can implement both. A PR for this
would be welcome!
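
A sketch of what "both" might look like in agent.properties (both key names
here are illustrative; host.overcommit.mem.mb matches the directive in Ivan's
PR elsewhere in this digest, and the factor key is purely hypothetical):

    # absolute form: add a fixed number of megabytes to the reported host RAM
    host.overcommit.mem.mb=16384
    # multiplier form (hypothetical key): scale the reported host RAM
    host.overcommit.mem.factor=1.5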

Wido

> 2017-09-12 13:46 GMT+07:00 Wido den Hollander :
> 
> >
> > > On 11 September 2017 at 13:04, Ivan Kudryavtsev <kudryavtsev...@bw-sw.com> wrote:
> > >
> > >
> > > Hi, Wido.
> > >
> > > Yes, you can. But it doesn't work the way I expect, because it cuts RAM
> > > from the VM by dividing it by the overprovisioning factor: a VM with 2GB
> > > of RAM and an overprovisioning factor of 2.0 will get 1GB displayed by
> > > the "free" command. That's why I finished the message by saying that
> > > maybe I just don't get the idea. The behaviour is the same in my prod
> > > 4.3 and new 4.9.
> > >
> >
> > Hmm, ok.
> >
> > So for the KVM Agent you can add "host reserved mb" to the
> > agent.properties, but you are proposing a setting where you can multiply
> > the memory?
> >
> > E.g. by 1.5 if you want to, and have the Agent expose that to the MGMT server?
> >
> > Wido
> >
> > > 2017-09-11 18:00 GMT+07:00 Wido den Hollander :
> > >
> > > > Hi,
> > > >
> > > > > On 10 September 2017 at 8:37, Ivan Kudryavtsev <kudryavtsev...@bw-sw.com> wrote:
> > > > >
> > > > >
> > > > > Hello, community.
> > > > >
> > > > > In recent years the Linux kernel has gained some interesting features
> > > > > like KSM, ZSWAP and ZRAM. Hardware also steps forward: we now see
> > > > > Intel 3D XPoint and extremely fast SSD drives with M.2 and PCI-E
> > > > > interfaces.
> > > > >
> > > > > These facilities enable potentially interesting uses of overcommitted
> > > > > RAM for hosts. According to IBM's investigations, Zswap with LZ4/ZBUD
> > > > > increases virtual RAM by 40%.
> > > > >
> > > > > I investigated the current Apache CloudStack memory overcommitment
> > > > > capabilities; they mostly affect the VM's RAM by utilizing ballooning,
> > > > > and I don't think that is what is needed to open up these new
> > > > > facilities. There are many cases which could utilize ZSWAP and fast
> > > > > swap devices to efficiently provision more RAM than is physically
> > > > > present.
> > > > >
> > > > > I suppose the CloudStack Agent for KVM could have a configuration
> > > > > parameter which "mangles" the RAM it reports. On the other hand, it
> > > > > could be done by implementing host properties on the server side. I
> > > > > tried manually increasing the value in the host table:
> > > > >
> > > > > update host set ram=ram * 1.4 where id=1;
> > > > >
> > > > > and it seems that until the next host stats update it works as
> > > > > expected. I think this workaround is useful, but it would be better
> > > > > to have the function in core as a standard feature.
> > > > >
> > > >
> > > > Can't you set memory overprovisioning on a cluster basis in the GUI? I
> > > > thought you could.
> > > >
> > > > Wido
> > > >
> > > > > Let me know what you think about it; it might be that I don't
> > > > > understand something and ACS already has this in place? I would also
> > > > > like to hear your thoughts on ZSWAP usage in practice.
> > > > >
> > > > > --
> > > > > With best regards, Ivan Kudryavtsev
> > > > > Bitworks Software, Ltd.
> > > > > Cell: +7-923-414-1515
> > > > > WWW: http://bitworks.software/ 
> > > >
> > >
> > >
> > >
> > > --
> > > With best regards, Ivan Kudryavtsev
> > > Bitworks Software, Ltd.
> > > Cell: +7-923-414-1515
> > > WWW: http://bitworks.software/ 
> >
> 
> 
> 
> -- 
> With best regards, Ivan Kudryavtsev
> Bitworks Software, Ltd.
> Cell: +7-923-414-1515
> WWW: http://bitworks.software/ 


Re: KVM Host overprovisioning

2017-09-12 Thread Ivan Kudryavtsev
Yes, sure.

What I want is an ability to increase host memory rather than decrease it.

So the first suggestion is to add a parameter that increases the amount of
megabytes or gigabytes, not necessarily a multiplier. Adding it manually via
agent.properties is a good way to implement it because different hosts can
have different capabilities (depending on CPU model), and manual "per-host"
configuration is better than just a cluster configuration option.

2017-09-12 13:46 GMT+07:00 Wido den Hollander :

>
> > On 11 September 2017 at 13:04, Ivan Kudryavtsev <kudryavtsev...@bw-sw.com> wrote:
> >
> >
> > Hi, Wido.
> >
> > Yes, you can. But it doesn't work the way I expect, because it cuts RAM
> > from the VM by dividing it by the overprovisioning factor: a VM with 2GB
> > of RAM and an overprovisioning factor of 2.0 will get 1GB displayed by
> > the "free" command. That's why I finished the message by saying that
> > maybe I just don't get the idea. The behaviour is the same in my prod
> > 4.3 and new 4.9.
> >
>
> Hmm, ok.
>
> So for the KVM Agent you can add "host reserved mb" to the
> agent.properties, but you are proposing a setting where you can multiply
> the memory?
>
> E.g. by 1.5 if you want to, and have the Agent expose that to the MGMT server?
>
> Wido
>
> > 2017-09-11 18:00 GMT+07:00 Wido den Hollander :
> >
> > > Hi,
> > >
> > > > On 10 September 2017 at 8:37, Ivan Kudryavtsev <kudryavtsev...@bw-sw.com> wrote:
> > > >
> > > >
> > > > Hello, community.
> > > >
> > > > In recent years the Linux kernel has gained some interesting features
> > > > like KSM, ZSWAP and ZRAM. Hardware also steps forward: we now see
> > > > Intel 3D XPoint and extremely fast SSD drives with M.2 and PCI-E
> > > > interfaces.
> > > >
> > > > These facilities enable potentially interesting uses of overcommitted
> > > > RAM for hosts. According to IBM's investigations, Zswap with LZ4/ZBUD
> > > > increases virtual RAM by 40%.
> > > >
> > > > I investigated the current Apache CloudStack memory overcommitment
> > > > capabilities; they mostly affect the VM's RAM by utilizing ballooning,
> > > > and I don't think that is what is needed to open up these new
> > > > facilities. There are many cases which could utilize ZSWAP and fast
> > > > swap devices to efficiently provision more RAM than is physically
> > > > present.
> > > >
> > > > I suppose the CloudStack Agent for KVM could have a configuration
> > > > parameter which "mangles" the RAM it reports. On the other hand, it
> > > > could be done by implementing host properties on the server side. I
> > > > tried manually increasing the value in the host table:
> > > >
> > > > update host set ram=ram * 1.4 where id=1;
> > > >
> > > > and it seems that until the next host stats update it works as
> > > > expected. I think this workaround is useful, but it would be better
> > > > to have the function in core as a standard feature.
> > > >
> > >
> > > Can't you set memory overprovisioning on a cluster basis in the GUI? I
> > > thought you could.
> > >
> > > Wido
> > >
> > > > Let me know what you think about it; it might be that I don't
> > > > understand something and ACS already has this in place? I would also
> > > > like to hear your thoughts on ZSWAP usage in practice.
> > > >
> > > > --
> > > > With best regards, Ivan Kudryavtsev
> > > > Bitworks Software, Ltd.
> > > > Cell: +7-923-414-1515
> > > > WWW: http://bitworks.software/ 
> > >
> >
> >
> >
> > --
> > With best regards, Ivan Kudryavtsev
> > Bitworks Software, Ltd.
> > Cell: +7-923-414-1515
> > WWW: http://bitworks.software/ 
>



-- 
With best regards, Ivan Kudryavtsev
Bitworks Software, Ltd.
Cell: +7-923-414-1515
WWW: http://bitworks.software/