Re: New VP of CloudStack: Paul Angus

2019-03-11 Thread Rubens Malheiro
Congrats, Paul!

I'm happy for you and CloudStack!

It's a nice choice!



On 11/03/19 12:30, "Todd Pigram"  wrote:

Congratulations Paul


On Mon, Mar 11, 2019 at 11:18 AM Jochim, Ingo 
wrote:

> Congratulations Paul !!!
>
> -Original Message-
> From: Tutkowski, Mike 
> Sent: Monday, 11 March 2019 16:16
> To: users@cloudstack.apache.org; d...@cloudstack.apache.org
> Subject: New VP of CloudStack: Paul Angus
>
> Hi everyone,
>
> As you may know, the role of VP of CloudStack (Chair of the CloudStack
> PMC) has a one-year term. My term has now come and gone.
>
> I’m happy to announce that the CloudStack PMC has elected Paul Angus as
> our new VP of CloudStack.
>
> As many already know, Paul has been an active member of the CloudStack
> Community for over six years now. I’ve worked with Paul on and off
> throughout much of that time and I believe he’ll be a great fit for this
> role.
>
> Please join me in welcoming Paul as the new VP of Apache CloudStack!
>
> Thanks,
> Mike
>


-- 


Todd Pigram
http://about.me/ToddPigram
www.linkedin.com/in/toddpigram/
@pigram86 on twitter
https://plus.google.com/+ToddPigram86
Mobile - 216-224-5769

PGP Public Key




Re: CloudStack Collab in Brazil

2018-12-24 Thread Rubens Malheiro
Yeah! Go! Go! Go!

Sent from my iPhone

> On 24 Dec 2018, at 08:36, Rafael Weingärtner  
> wrote:
> 
> It would be great to have your presence :)
> My idea is to have a call after this period of Christmas and New Year’s
> Eve. I will let you guys know when I get the dates and time.
> 
> Thanks for your (the ACS community) time, attention, and effort so far.
> 
>> On Fri, Dec 21, 2018 at 10:04 PM Tim Mackey  wrote:
>> 
>> Rafael,
>> 
>> I can't do a call next week, but the following week I should be able to.
>> The tracks look great. From my side, assuming I can get travel approval,
>> I'll submit on the cloud security track. Regulations are part of my life
>> these days!
>> 
>> -tim
>> 
>> On Fri, Dec 21, 2018 at 10:19 AM Rafael Weingärtner <
>> rafaelweingart...@gmail.com> wrote:
>> 
>>> No date has been set yet. Next week I will contact them again, and then I
>>> will reach out to the community here to set a date. Thanks for the
>>> willingness to make this happen! Your participation is essential. You guys
>>> have great use cases of ACS.
>>> 
>>> On Fri, Dec 21, 2018 at 12:49 PM Ricardo Makino 
>>> wrote:
>>> 
 Hi Rafael,
 
 I am able to join you in the call; when do you expect it to happen?
 
 Maybe we can use Doodle (https://doodle.com) to check the schedule of
 all involved in the call.
 
 Best Regards,
 --
 Ricardo Makino
 
 
 On Fri, Dec 21, 2018 at 11:23 AM Rafael Weingärtner <
 rafaelweingart...@gmail.com> wrote:
 
> Hello Folks,
> 
> I have submitted the tracks. The next step now is to schedule a meeting
> with the TDC organizers again. However, at this time, I need some of you in
> the call. We will be discussing channels to spread the word regarding the
> conference, talks selection process, maybe branding (CCC, Apache
> CloudStack) with the TDC, and so on. Who would be willing to join me in
> this call?
> 
> 
> Here go the details of the tracks that I submitted.
> 
>> *Track Name:* Cloud Orchestration
>> *Track slogan:* Meet the cloud builders and learn how clouds are created
>> *Track description:*
>> Computing resources became a commodity, and as such, they are sold and
>> consumed and billed on demand. Cloud computing provided all of that with
>> resiliency, elasticity, and scalability; thus, it is enabling companies
>> to efficiently use computing resources.
>> 
>> The cloud orchestration track will address topics regarding features and
>> cloud orchestration systems design (e.g. CloudStack and OpenStack) and
>> cloud data center structure. Moreover, the audience will have the
>> opportunity to meet the people behind the most widely deployed cloud
>> orchestration systems in the world.
>> 
>> *Targeted audience:*
>> The main audience is cloud operators/administrators that are dealing on a
>> daily basis with cloud computing platforms. Other interested parties are
>> cloud developers (developers interested in working at the base of the
>> cloud, and not just on consuming cloud resources), cloud consultants and
>> business people that work creating, deploying, and maintaining cloud
>> computing environments.
>> 
> 
> *Track Name:* Cloud DevOps
>> *Track slogan:* Get together with other cloud administrators and
>> developers and share your daily hacks!
>> *Track description:*
>> Companies are either creating private clouds or using public clouds
>> (sometimes doing both at the same time). This track provides a space for
>> system administrators and developers to share their day-to-day tasks and
>> hacks when consuming cloud resources or maintaining cloud environments.
>> *Targeted audience:*
>> The targeted audience is cloud developers (cloud consumers) or
>> cloud
>> administrators that use in a daily bases cloud resources and or
>> cloud
>> platforms.
>> 
> 
> *Track Name:* Cloud testing and QA
>> *Track slogan:* Let’s find out how people assure quality and cope with
>> the scale and dynamism of the cloud at the same time
>> *Track description:*
>> We all talk about agile development and the speed, elasticity, and
>> cost-effective nature of cloud environments. It is a challenge for
>> developers, testers and team leaders to provide software quality
>> assurance in ever shorter development cycles.
>> 
>> The solution for reducing development cycles, optimizing development and
>> testing efforts, and making the most of the cloud is automation. Thus,
>> continuous integration, delivery, and deployment are the buzzwords that
>> come to save the day for DevOps.
>> 
>> *Targeted audience:*
>> Developers and testers that take advantage of cloud resources 

Re: CloudStack DNS names to PowerDNS export

2018-08-21 Thread Rubens Malheiro
Hey Ivan! Thanks! Good job! This is extremely necessary for me.
✓Cloudstack

On Tue, Aug 21, 2018, 4:18 AM Daan Hoogland  wrote:

> hey Ivan, I don't have much experience with anything but bind, but this
> looks nice!
>
> On Tue, Aug 21, 2018 at 7:45 AM, Ivan Kudryavtsev <
> kudryavtsev...@bw-sw.com>
> wrote:
>
> > Hello, users, devs.
> >
> > We developed a small service for the creation of DNS records (A, AAAA,
> > PTR) in PowerDNS and published it on GitHub:
> >
> > https://github.com/bwsw/cs-powerdns-integration
> >
> > Licensed under Apache 2 License.
> >
> > *Rationale*
> >
> > CloudStack VR maintains DNS A records for VMs, but since the VR is an
> > ephemeral entity which can be removed and recreated, and whose IP
> > addresses can change, it's inconvenient to use it for zone delegation.
> > Also, it's difficult to pair a second DNS server with it, as that
> > requires VR hacking. So, to overcome those difficulties and provide
> > external users with FQDN access to VMs, we implemented this solution.
> >
> >
> > --
> > With best regards, Ivan Kudryavtsev
> > Bitworks LLC
> > Cell: +7-923-414-1515
> > WWW: http://bitworks.software/ 
> >
>
>
>
> --
> Daan
>
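
For anyone trying this out, a quick way to verify the records the service
creates once PowerDNS is answering (the server IP, zone and VM names below
are placeholder examples, not from the project docs):

    # forward A record for a VM
    dig @10.0.0.53 vm-web01.cloud.example.com A +short
    # IPv6 AAAA record
    dig @10.0.0.53 vm-web01.cloud.example.com AAAA +short
    # reverse PTR lookup for the VM's address
    dig @10.0.0.53 -x 203.0.113.10 +short

If the forward and reverse answers match the VM, delegation to PowerDNS is
working independently of the VR.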


Re: KVM Migration storage slowly

2017-11-23 Thread Rubens Malheiro
Hey Simon, hey Rafael,

Thanks for the attention!

I solved my problem:
the write and read cache was disabled on the HP controller.
But I also noticed that my secondary storage, being Gluster, has a
considerable loss of speed.
I reinstalled my primary storage and am preparing another, Gluster-less
environment for secondary storage, since I realized that migrating secondary
storage is not easy.
Thank you all!

Sorry for my English, please :)

On Mon, Nov 20, 2017 at 3:35 PM, Simon Weller <swel...@ena.com.invalid>
wrote:

> Rubens,
>
> Also, you mention MB/s (megabytes per second) and then reference 10GB in
> regards to network speed (which is really 10Gb, 10 gigabits per second).
> 300MB/s is about 2.4Gb/s.
> What type of primary storage are you using?
>
> - Si
>
>
>
> 
> From: Rafael Weingärtner <rafaelweingart...@gmail.com>
> Sent: Sunday, November 19, 2017 8:30 AM
> To: users@cloudstack.apache.org
> Subject: Re: KVM Migration storage slowly
>
> Hey Rubens,
> This goes without saying, so excuse me if you have already checked these
> parameters... did you check the network parameters? I think there are one
> or two that are used to control this kind of operation.
>
> On Fri, Nov 17, 2017 at 12:30 PM, Rubens Malheiro <
> rubens.malhe...@gmail.com
> > wrote:
>
> > Hello everyone!
> > I am using cloudstack 4.9.2.0 with 5 KVM hosts, however I have problem
> with
> > the migration of disks between storages they are limited to 300MBs in a
> > 10GB network I already checked the network with iperf and I get 10Gb as
> > well as the IOs of the disks. But the copies really are limited to
> 300MBs,
> > has anyone ever been through this?
> >
> > DOC
> >
>
>
>
> --
> Rafael Weingärtner
>
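
Following up on Rafael's hint about throttling parameters: on KVM, live
migrations go through libvirt, which applies a per-domain bandwidth cap that
can be inspected and raised. A minimal sketch (the domain name is a
placeholder; CloudStack's KVM agent also exposes a migration speed setting in
agent.properties, if I remember correctly):

    # show the current migration bandwidth cap (MiB/s) for a domain
    virsh migrate-getspeed i-2-345-VM
    # raise the cap before starting the migration, e.g. to ~1000 MiB/s
    virsh migrate-setspeed i-2-345-VM 1000

Note this governs libvirt-driven migrations; offline volume copies can be
capped by the storage or controller instead, as turned out to be the case
here.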


KVM Migration storage slowly

2017-11-17 Thread Rubens Malheiro
Hello everyone!
I am using CloudStack 4.9.2.0 with 5 KVM hosts; however, I have a problem with
the migration of disks between storages: they are limited to 300MB/s on a 10Gb
network. I already checked the network with iperf and I get 10Gb/s, as well as
from the IOs of the disks. But the copies really are limited to 300MB/s; has
anyone ever been through this?

DOC


Re: HA Storage Solution

2017-11-17 Thread Rubens Malheiro
I believe the best solution for replication is GlusterFS. I have it working,
although I could never make CloudStack operate with it natively; I use
GlusterFS and NFS-Ganesha.
I'm sorry for my English.

On Fri, Nov 17, 2017 at 7:48 AM, William Alianto  wrote:

> Hi,
>
> Sorry for the late reply. So, to summarize: NFS with DRBD is enough for
> POC purposes but not recommended for large-scale production, where Ceph
> would be more suitable.
>
> Thanks for the insight for this problem.
>
> --
> Regards,
>
> William
>
>
> On 09-Nov-17 19:28:49, Ivan Kudryavtsev  wrote:
> Hi. I wouldn't recommend going with HA NFS, because you have to use it in
> sync mode, which leads to poor performance.
>
> I suggest using Ceph with SSDs (or even Optane for log devices), because
> it's the thing which is supposed to be clustered, and KVM "knows" how to use
> it properly and handle node outages via librados.
>
> But, still, sync NFS over DRBD and keepalived should work; it's just not a
> scalable solution, of course.
>
> On 9 Nov. 2017 at 19:10, "Makrand"
> wrote:
>
> > Hi William,
> >
> > The HA capabilities for storage are offered by the OS/vendor you're using
> > for storage. Different projects use different solutions for storage HA
> > (HA is mostly a paid/premium affair). E.g., for one of our zones, we are
> > using LUNs coming from a Hitachi storage array. The Hitachi array is
> > enterprise-class gear which has HA built in.
> >
> > If you're just starting and want to test things, I would recommend using
> > something like FreeNAS. This is software-defined storage built on ZFS.
> > ZFS offers software RAID and has lots of features. Plus, you can get both
> > NFS and iSCSI.
> >
> > If you have time in hand, try DRBD (DRBD just replicates blocks from disk
> > to disk over the network). You can create a highly available NFS cluster
> > using DRBD on Linux (ACS supports both primary and secondary storage on
> > NFS; just make sure you have sufficient network bandwidth between the
> > hypervisor hosts and the storage system). This would be a bit
> > time-consuming to set up.
> >
> >
> >
> >
> > --
> > Makrand
> >
> >
> > On Thu, Nov 9, 2017 at 9:41 AM, William Alianto wrote:
> >
> > > Hi,
> > >
> > > I'm currently still learning about ACS deployment. I have seen the
> > > solutions for an HA management server and hypervisor, but I still don't
> > > know how to create an HA storage cluster for primary and secondary
> > > storage. I would think of making HA NFS storage using HAProxy, or maybe
> > > using something like a Ceph cluster. Would there be any better option
> > > for the HA storage solution?
> > >
> > > --
> > > Regards,
> > >
> > > William
> >
>
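
For reference, a minimal sketch of the DRBD-backed NFS pair Makrand
describes, assuming a resource named r0 is already defined in
/etc/drbd.d/r0.res pointing at the same backing disk on both nodes (names,
devices and paths below are placeholders):

    drbdadm create-md r0           # initialise metadata (run on both nodes)
    drbdadm up r0                  # attach and connect (run on both nodes)
    drbdadm primary --force r0     # choose the initial primary (one node only)
    mkfs.xfs /dev/drbd0            # filesystem on the replicated device
    mkdir -p /export/primary && mount /dev/drbd0 /export/primary
    # export over NFS; keepalived would float a VIP between the two nodes
    echo '/export/primary *(rw,sync,no_root_squash)' >> /etc/exports
    exportfs -ra

As Ivan notes, the synchronous replication (DRBD protocol C plus the NFS
sync export) is exactly what costs performance.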


Re: Quick 1 Question Survey

2017-09-12 Thread Rubens Malheiro
Hello

Cloudstack Management = CentOS 7
KVM = CentOS 7
XEN = None

On Tue, Sep 12, 2017 at 9:12 AM, Rene Moser  wrote:

> What Linux OS and release are you running below your:
>
> * CloudStack/Cloudplatform Management
> * KVM/XEN Hypervisor Host
>
> Possible answer example
>
> Cloudstack Management = centos6
> KVM/XEN = None, No KVM/XEN
>
> Thanks in advance
>
> Regards
> René
>
>


Re: KVM qcow2 perfomance

2017-08-05 Thread Rubens Malheiro
Wow great explanation! Thank you Eric!
On Sat, 5 Aug 2017 at 14:59 Eric Green  wrote:

> qcow2 performance has been historically bad regardless of the underlying
> storage (it is an absolutely terrible storage format), which is why most
> OpenStack Kilo and later installations instead usually use managed LVM and
> present LVM volumes as iSCSI volumes to QEMU, because using raw LVM volumes
> directly works quite a bit better (especially since you can do "thick"
> volumes, which get you the best performance, without having to zero out a
> large file on disk). But Cloudstack doesn't use that paradigm. Still, you
> can get much better performance with qcow2 regardless:
>
> 1) Create a disk offering that creates 'sparse' qcow2 volumes (the
> 'sparse' provisioning type). Otherwise every write is actually multiple
> writes -- one to extend the previous qcow2 file, one to update the inode
> with the new file size, and one to update the qcow2 file's own notion of
> how long it is and what all of its sections are, and one to write the
> actual data. And these are all *small* random writes, which SSD's have
> historically been bad at due to write zones. Note that if you look at a
> freshly provisioned 'sparse' file in the actual data store, it might look
> like it's taking up 2tb of space, but it's actually taking up only a few
> blocks.
>
> 2) In that disk offering, if you care more about performance than about
> reliability, set the caching mode to 'writeback'. (The default is 'none').
> This will result in larger writes to the SSD, which it'll do at higher
> rates of speed than small writes. The downside is that your hardware and
> OS better be *ultra* reliable with battery backup and clean shutdown in
> case of power failure and etc., or the data in question is toast if
> something crashes or the power goes out. So consider how important the data
> is before selecting this option.
>
> 3) If you have a lot of time and want to pre-provision your disks in full,
> in that disk offering set the provisioning type to 'fat'. This will
> pre-zero a qcow2 file of the full size that you selected. Be aware that
> Cloudstack does this zeroing of a volume commissioned with this offering
> type *when you attach it to a virtual machine*, not when you create it. So
> attach it to a "trash" virtual machine first before you attach it to your
> "real" virtual machine, unless you want a lot of downtime waiting for it to
> zero. But assuming you have a host filesystem that properly allocates files
> on a per-extent basis, and the extents match up with the underlying SSD
> write block size well, you should be able to get within 5% of hardware
> performance with 'fat' qcow2. (With 'thin' you can still come within 10% of
> that, which is why 'thin' might be the best for most workloads that require
> performance, and 'thin' doesn't waste space on blocks that have never been
> written and doesn't tie up your storage system for hours zeroing out a 2tb
> qcow2 file, so consider that if thinking 'fat').
>
> 4) USE XFS AS THE HOST FILESYSTEM FOR THE DATASTORE. ext4 will be
> *terrible*. I'm not sure what causes the bad will between ext4 on the
> storage host and qcow2, but I've seen it multiple times in my own testing
> of raw libvirt (no CloudStack). As for btrfs, btrfs will be terrible with
> regular 'thin' qcow2. There is an interaction between its write cycles and
> qcow2's write patterns that, as with ext4, causes very slow performance. I
> have not tested sparse qcow2 with btrfs because I don't trust btrfs, it has
> many design decisions reminiscent of ReiserFS, which ate many Linux
> filesystems back during the day. I have not tested ZFS. The ZFS on Linux
> implementation generally has good but not great performance, it was written
> for reliability, not performance, so it seemed a waste of my time to test
> it. I may do that this weekend however just to see. I inherited a PCIe M.2
> SSD, you see, and want to see what having that as the write cache device
> will do for performance
>
> 5) For the guest filesystem it really depends on your workload and the
> guest OS. I love ext4 for reliability inside a virtual machine, because you
> can't just lose an entire ext4 filesystem (it's based on ext2/ext3, which
> in turn were created when hardware was much less reliable than today and
> thus has a lot of features to keep you from losing an entire filesystem
> just because a few blocks went AWOL), but it's not a very fast filesystem.
> Xfs in my testing has the best performance for virtually all workloads.
> Generally, I use ext4 for root volumes, and make decisions for data volumes
> based upon how important the performance versus reliability equation works
> out for me. I have a lot of ext4 filesystems hanging around for data that
> basically sits there in place without many writes but which I don't want to
> lose.
>
> For best performance of all, manage this SSD storage *outside* of
> Cloudstack as a bunch 
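
To make Eric's sparse-versus-fat distinction concrete, here is what it looks
like at the qemu-img level (file names and sizes are arbitrary examples;
CloudStack chooses this via the disk offering's provisioning type):

    # sparse ('thin'): created instantly, occupies almost nothing on disk
    qemu-img create -f qcow2 thin.qcow2 100G
    du -h thin.qcow2                  # only a small amount actually allocated
    # 'fat': preallocating up front avoids the extra metadata writes
    qemu-img create -f qcow2 -o preallocation=full fat.qcow2 100G
    qemu-img info fat.qcow2           # virtual size vs. actual disk size

The cache mode Eric mentions ends up as cache='writeback' (or 'none') on the
disk's <driver> element in the libvirt domain XML.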

Re: KVM qcow2 perfomance

2017-08-05 Thread Rubens Malheiro
Rodrigo, what version and OS is the NFS server running? Is the network
performance between the host pod and the storage OK?
On Sat, 5 Aug 2017 at 14:03 Ivan Kudryavtsev 
wrote:

> Qcow2 does lazy allocation. Try to write a big file inside the VM with dd
> (say 10GB), erase it, and try again. Maybe lazy allocation works badly on
> your RAID-5E.
>
> On 5 Aug. 2017 at 23:29, "Rodrigo Baldasso" <
> rodr...@loophost.com.br> wrote:
>
> > Yes... mounting an LVM volume inside the host works great, ~500MB/s write
> > speed. Inside the guest I'm using ext4, but the speed is around 30MB/s.
> >
> > - - - - - - - - - - - - - - - - - - -
> >
> > Rodrigo Baldasso - LHOST
> >
> > (51) 9 8419-9861
> > - - - - - - - - - - - - - - - - - - -
> > On 05/08/2017 13:26:00, Ivan Kudryavtsev 
> wrote:
> > Rodrigo, does your fio testing show good results? What filesystem are you
> > using? KVM is known to work very badly over BTRFS.
> >
> > On 5 Aug. 2017 at 23:16, "Rodrigo Baldasso"
> > rodr...@loophost.com.br> wrote:
> >
> > Hi Ivan,
> >
> > In fact I'm testing using local storage... but on NFS I was getting
> > similar results too.
> >
> > Thanks!
> >
> > - - - - - - - - - - - - - - - - - - -
> >
> > Rodrigo Baldasso - LHOST
> >
> > (51) 9 8419-9861
> > - - - - - - - - - - - - - - - - - - -
> > On 05/08/2017 13:03:24, Ivan Kudryavtsev wrote:
> > Hi, Rodrigo. It looks strange. Check your NFS configuration and network
> > errors/loss. It should work great.
> >
> > On 5 Aug. 2017 at 22:22, "Rodrigo Baldasso"
> > rodr...@loophost.com.br> wrote:
> >
> > Hi everyone,
> >
> > I'm having trouble achieving a good I/O rate using CloudStack qcow2 with
> > any type of caching (or even with it disabled).
> >
> > We have some RAID-5E SSD arrays which give us very good rates directly on
> > the node/host, but on the guest the speed is terrible.
> >
> > Does anyone know a solution/workaround for this? I never used qcow (only
> > raw+LVM), so I don't know what to do to solve this.
> >
> > Thanks!
> >
>
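
A minimal version of the test Ivan suggests, run inside the guest (the path
and size are examples; oflag=direct bypasses the guest page cache so you
measure the disk path):

    # first pass forces qcow2 cluster allocation
    dd if=/dev/zero of=/root/testfile bs=1M count=10240 oflag=direct status=progress
    rm -f /root/testfile
    # second pass mostly rewrites clusters already allocated in the image
    dd if=/dev/zero of=/root/testfile bs=1M count=10240 oflag=direct status=progress

If the second run is much faster than the first, the bottleneck is qcow2
allocation on the array rather than raw disk speed.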


Re: [DISCUSS] CloudStack 4.9.3.0 (LTS)

2017-07-12 Thread Rubens Malheiro
What would be very interesting would be to enable the live KVM snapshot
feature.

Maybe this would break the barrier of the KVM vs. XEN pod limitation.

And it would be a feature with immediate impact.

Sorry for my English (via translate).

On Wed, Jul 12, 2017 at 12:52 PM, Outback Dingo 
wrote:

> On Wed, Jul 12, 2017 at 10:41 AM, Rohit Yadav 
> wrote:
> > All,
> >
> >
> > Please send me a list of PRs you would like to see in 4.9.3.0 so we can
> > freeze the scope for 4.9.3.0; no promises, but it may be possible to have
> > a release plan as soon as next week.
> >
> >
>
> Support for XenServer 7.1 would be nice
>
>
> > - Rohit
> >
> > 
> > From: Wido den Hollander 
> > Sent: 12 July 2017 01:27:30
> > To: Rohit Yadav; d...@cloudstack.apache.org; users@cloudstack.apache.org
> > Subject: Re: [DISCUSS] CloudStack 4.9.3.0 (LTS)
> >
> > Hi,
> >
> > I would suggest: https://github.com/apache/cloudstack/pull/2131
> >
> > Serious issue with Ubuntu 16.04 and statistics gathering on KVM.
> >
> > Wido
> >
> >> Op 11 juli 2017 om 11:49 schreef Rohit Yadav  >:
> >>
> >>
> >> Hi Sean,
> >>
> >>
> >> Thanks for sharing.
> >>
> >>
> >> - Rohit
> >>
> >> 
> >> From: Sean Lair 
> >> Sent: 11 July 2017 03:41:17
> >> To: d...@cloudstack.apache.org
> >> Cc: users@cloudstack.apache.org
> >> Subject: RE: [DISCUSS] CloudStack 4.9.3.0 (LTS)
> >>
> >> Here are three issues we ran into in 4.9.2.0.  We have been running all
> of these fixes for several months without issues.  The code changes are all
> very easy/small, but had a big impact for us.
> >>
> >> I'd respectfully suggest they go into 4.9.3.0:
> >>
> >> https://github.com/apache/cloudstack/pull/2041 (VR related jobs
> scheduled and run twice on mgmt servers)
> >> https://github.com/apache/cloudstack/pull/2040 (Bug in monitoring of
> S2S VPNs - also exists in 4.10)
> >> https://github.com/apache/cloudstack/pull/1966 (IPSEC VPNs do not work
> after vRouter reboot)
> >>
> >> Thanks
> >> Sean
> >>
> >> rohit.ya...@shapeblue.com
> >> www.shapeblue.com
> >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> >> @shapeblue
> >>
> >>
> >>
> >>
> >> -Original Message-
> >> From: Rohit Yadav [mailto:rohit.ya...@shapeblue.com]
> >> Sent: Friday, July 7, 2017 1:14 AM
> >> To: d...@cloudstack.apache.org
> >> Cc: users@cloudstack.apache.org
> >> Subject: [DISCUSS] CloudStack 4.9.3.0 (LTS)
> >>
> >> All,
> >>
> >>
> >> With 4.10.0.0 voted, I would like to start some initial discussion
> around the next minor LTS release 4.9.3.0. At the moment I don't have a
> timeline, plans or dates to share but I would like to engage with the
> community to gather list of issues, commits, PRs that we should consider
> for the next LTS release 4.9.3.0.
> >>
> >>
> >> To reduce our test and QA scope, we don't want to consider changes that
> >> are new features or enhancements, but strictly blocker/critical/major
> >> bug fixes and security-related fixes, and we can consider reverting any
> >> already committed/merged PR(s) on the 4.9 branch (committed since
> >> 4.9.2.0).
> >>
> >>
> >> Please go through the list of commits since 4.9.2.0 (you can also run
> >> 'git log 4.9.2.0..4.9') and let us know if there is any change we should
> >> consider reverting:
> >>
> >> https://github.com/apache/cloudstack/commits/4.9
> >>
> >>
> >> I started backporting some fixes on the 4.9 branch; please go through
> >> the following PR and raise objections on changes/commits that we should
> >> not backport or revert:
> >>
> >> https://github.com/apache/cloudstack/pull/2052
> >>
> >>
> >> Lastly, please also share any PRs that we should consider
> reviewing+merging on 4.9 branch for the 4.9.3.0 release effort.
> >>
> >>
> >> - Rohit
> >>
> >> rohit.ya...@shapeblue.com
> >> www.shapeblue.com
> >> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
> >>
> >>
> >>
> >
> > rohit.ya...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
>


Re: KVM VM Snapshots

2017-07-11 Thread Rubens Malheiro
Thank you, Si.
This will be incredible! And revolutionary.

CloudStack WINS!

On Tue, Jul 11, 2017 at 11:21 AM, Simon Weller <swel...@ena.com.invalid>
wrote:

> The new VM Snapshot functionality for KVM supports disk and memory snaps.
> This means you can recover a VM to a point in time, so assuming that's what
> you are asking, yes, it's hot.
>
> It does rely on QCOW2 disk formats right now.
>
>
> - Si
>
>
> ________
> From: Rubens Malheiro <rubens.malhe...@gmail.com>
> Sent: Monday, July 10, 2017 7:44 PM
> To: users@cloudstack.apache.org
> Subject: Re: KVM VM Snapshots
>
> Sorry to butt in,
> but will the KVM snapshot support in version 4.10 be hot (live)?
> On Mon, 10 Jul 2017 at 21:28 Simon Weller <swel...@ena.com.invalid> wrote:
>
> > Asai,
> >
> > 4.10 was approved last week. It should hit the repos with the next few
> > days.
> >
> > - Si
> >
> > Simon Weller/615-312-6068
> >
> > -Original Message-
> > From: Asai [a...@globalchangemusic.org]
> > Received: Monday, 10 Jul 2017, 4:49PM
> > To: users@cloudstack.apache.org [users@cloudstack.apache.org]
> > Subject: Re: KVM VM Snapshots
> >
> > Rather than 9.10 I meant 4.10.  Rather than 9.2 I meant 4.9.2. Sorry.
> >
> >
> > On 7/10/2017 2:46 PM, Asai wrote:
> > > Greetings,
> > >
> > > Back in January there was a push to integrate the KVM snapshotting
> > > ability into the 9.10 trunk.  I think this did get merged in, but 9.10
> > > doesn't seem to be anywhere near release yet, so wondering if the devs
> > > can push the KVM snapshotting patch into the 9.2 trunk and release as
> > > a minor update?
> > >
> > > Asai
> > >
> >
> >
>
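
For the curious, the disk-plus-memory snapshot Si describes corresponds to a
libvirt internal snapshot of a qcow2-backed guest. A rough sketch of what
that looks like outside CloudStack (the domain and snapshot names are
examples, and this is not necessarily the exact call path CloudStack uses):

    virsh snapshot-create-as i-2-10-VM snap1   # disk + RAM state, qcow2 only
    virsh snapshot-list i-2-10-VM
    virsh snapshot-revert i-2-10-VM snap1      # roll the VM back to the snap

Note that writing out RAM can pause the guest for a while, so test the
impact before relying on it.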


Re: KVM VM Snapshots

2017-07-10 Thread Rubens Malheiro
Sorry to butt in,
but will the KVM snapshot support in version 4.10 be hot (live)?
On Mon, 10 Jul 2017 at 21:28 Simon Weller  wrote:

> Asai,
>
> 4.10 was approved last week. It should hit the repos with the next few
> days.
>
> - Si
>
> Simon Weller/615-312-6068
>
> -Original Message-
> From: Asai [a...@globalchangemusic.org]
> Received: Monday, 10 Jul 2017, 4:49PM
> To: users@cloudstack.apache.org [users@cloudstack.apache.org]
> Subject: Re: KVM VM Snapshots
>
> Rather than 9.10 I meant 4.10.  Rather than 9.2 I meant 4.9.2. Sorry.
>
>
> On 7/10/2017 2:46 PM, Asai wrote:
> > Greetings,
> >
> > Back in January there was a push to integrate the KVM snapshotting
> > ability into the 9.10 trunk.  I think this did get merged in, but 9.10
> > doesn't seem to be anywhere near release yet, so wondering if the devs
> > can push the KVM snapshotting patch into the 9.2 trunk and release as
> > a minor update?
> >
> > Asai
> >
>
>


Re: Network architecture

2017-07-06 Thread Rubens Malheiro
I'll give you an opinion; excuse my English, I use Translate.

I recently moved a whole pod with 6 Xen machines to KVM.
I'll say it was much smoother and seems to be more stable for both Windows
and Linux VMs.

But it is necessary to convert the machines from VHD to qcow before deploying.

Works well.

What is really bad are the snapshots, which can be enabled in CloudStack, but
they take time and the VM is frozen.

I had to migrate off XEN because no version recognizes my new 10Gb cards.

Sorry, my English; this is more of an opinion.

On Wed, Jul 5, 2017 at 7:36 PM, Grégoire Lamodière 
wrote:

> Dear Paul / Remi,
>
> Thank you for your feedback and the bonding advice.
> We'll go in this direction.
>
> @Remi, you are right about KVM.
> Right now, we still use XenServer because of snapshots and backup solutions.
> If KVM does the job properly, we might give it a try on this new zone.
> Do you have any feedback on migrating instances from a XenServer zone to a
> KVM zone? (Should we just uninstall XenTools, export the VM as a template
> and download it in the new zone? Or is it a more complex process?)
>
> Thanks again.
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
> -Original Message-
> From: Paul Angus [mailto:paul.an...@shapeblue.com]
> Sent: Wednesday, 5 July 2017 21:05
> To: users@cloudstack.apache.org
> Subject: RE: Network architecture
>
> Hi Grégoire,
>
> With those NICs (and without any other background), I'd go with bonding
> your 1G NICs together and your 10G NICs together, and put primary and
> secondary storage over the 10G.  Mgmt traffic is minimal and spread over
> all of your hosts, as would be public traffic, so these would be fine over
> the bonded 1Gb links.  Finally, guest traffic: this would normally be fine
> over the 1Gb links, especially if you throttle the traffic a little, unless
> you know that you'll have especially high guest traffic.
>
>
>
> Kind regards,
>
> Paul Angus
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>
>
>
>
> -Original Message-
> From: Grégoire Lamodière [mailto:g.lamodi...@dimsi.fr]
> Sent: 04 July 2017 21:15
> To: users@cloudstack.apache.org
> Subject: Network architecture
>
> Dear All,
>
> In the process of implementing a new CS advanced zone (4.9.2), I am
> wondering about the best network architecture to implement.
> Any idea / advice would be highly appreciated.
>
> 1/ Each host has 4 network adapters, 2 x 1GbE and 2 x 10GbE
> 2/ The primary store is NFS-based, 10GbE
> 3/ The secondary store is NFS-based, 10GbE
> 4/ Maximum network offering is 1Gbit to the Internet
> 5/ Hypervisor: Xen 7
> 6/ Hardware: HP Blade c7000
>
> Right now, my choice would be:
>
> 1/ Bond the 2 gigabit network cards and use the bond for mgmt + public
> 2/ Use one 10GbE for the storage network (operations on the secondary store)
> 3/ Use one 10GbE for guest traffic (and primary store traffic, by design)
>
> This architecture sounds good in terms of performance (using 10GbE where
> it makes sense, redundancy on mgmt + public with the bond).
>
> Another option would be to bond the two 10GbE interfaces, and use Xen
> labels to manage storage and guest traffic on the same physical network.
> This choice would give us failover on storage and guest traffic, but I am
> wondering if performance would be badly affected.
>
> Do you have any feedback on this ?
>
> Thanks all.
>
> Best Regards.
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
>
>
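
On the XenServer side, the 1G management/public bond Paul suggests is created
from the two PIFs; a minimal sketch (the UUIDs are placeholders you look up
first):

    # find the PIF uuids of the two 1G interfaces on the host
    xe pif-list device=eth0 params=uuid --minimal
    xe pif-list device=eth1 params=uuid --minimal
    # create the network the bond will attach to, then the bond itself
    xe network-create name-label=bond-mgmt-public
    xe bond-create network-uuid=<network-uuid> \
        pif-uuids=<eth0-pif-uuid>,<eth1-pif-uuid> mode=active-backup

mode can be active-backup, balance-slb or lacp depending on what the switch
supports; the same pattern applies to the 10G pair if you bond those too.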


Re: Alternative Cloudstack UI for KVM and Basic Zones (with SG)

2017-04-25 Thread Rubens Malheiro
Hey, this is nice! Congratulations on this work!!!


WIN/WIN

101%Cloudstack

> On 25 Apr 2017, at 08:17, John Adams  wrote:
> 
> This is great!!!
> 
> 
> --John O. Adams
> 
> On 25 April 2017 at 10:11, Ivan Kudryavtsev 
> wrote:
> 
>> Hello, Cloudstack community.
>> 
>> We are proud to present our latest development effort to you. During the
>> last 5 months we spent some time developing an alternative CloudStack UI
>> for basic zones with the KVM hypervisor and security groups. This is
>> basically the thing we are using in our clouds. During the design of the
>> software we tried to fulfill the expectations of our average cloud users
>> and simplify operations as much as possible.
>> 
>> The project is OSS and can be found on GitHub with a bunch of screenshots
>> and a deployment guide. It's under active development, so we will be glad
>> if you join and provide us with additional feedback, UX considerations and
>> other interesting information.
>> 
>> Project page at GitHub: https://bwsw.github.io/cloudstack-ui/
>> Source code: https://github.com/bwsw/cloudstack-ui
>> 
>> Have a good day. Looking forward to hearing your feedback.
>> 
>> --
>> With best regards, Ivan Kudryavtsev
>> Bitworks Software, Ltd.
>> Cell: +7-923-414-1515
>> WWW: http://bw-sw.com/
>> 


Re: [VOTE] Retirement of midonet plugin

2017-03-28 Thread Rubens Malheiro
+1

Sent from my iPhone

> On 28 Mar 2017, at 17:55, Sergey Levitskiy  
> wrote:
> 
> +1
> 


Re: Welcoming Wido as the new ACS VP

2017-03-17 Thread Rubens Malheiro
Hey Stevens, thanks for the stable work.

Hollander, welcome to a new challenge! Good luck!

See you in Miami!! I'd like a beer!

WIN/WIN
CLOUDSTACK


2017-03-17 7:20 GMT-03:00 Giles Sirett :

> Will - many thanks for your hard work over the past 12 months.
>
> Congrats Wido.
>
>
> Kind regards
> Giles
>
> giles.sir...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> -Original Message-
> From: Daan Hoogland [mailto:daan.hoogl...@shapeblue.com]
> Sent: 17 March 2017 08:13
> To: d...@cloudstack.apache.org; users@cloudstack.apache.org
> Subject: Re: Welcoming Wido as the new ACS VP
>
> Thanks to both of you great Ws. Have a good retirement Will! Good luck in
> your new capacity Wido!
>
> On 17/03/17 08:32, "Paul Angus"  wrote:
>
> Thanks Will for all the great work. And congratulations Wido - good
> luck.
>
>
>
> Kind regards,
>
> Paul Angus
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> -Original Message-
> From: Raja Pullela [mailto:raja.pull...@accelerite.com]
> Sent: 17 March 2017 02:59
> To: d...@cloudstack.apache.org; users@cloudstack.apache.org
> Subject: Re: Welcoming Wido as the new ACS VP
>
> Thank you Will for all the great work!
>
> cheers!
> Raja Pullela
> Engineering Team,
> Accelerite, 2055 Laurelwood Road,
> Santa Clara, CA, 95054
>
> On 3/16/17, 10:30 PM, "Will Stevens"  wrote:
>
> Hello Everyone,
> It has been a pleasure working with you as the ACS VP over the past
> year.
I would like to say Thank You to everyone who has supported me in this
role and has supported the project as a whole.
>
> It is my pleasure to announce that Wido den Hollander has been voted
> in to replace me as the Apache Cloudstack VP in our annual VP rotation.
Wido has a long history with the project and we are happy to welcome him into
this new role.
>
> Be sure to join us at CCC in Miami [1] so we can initiate him
> correctly over many beers.  :)
>
> Cheers,
>
> *Will Stevens*
>
> ​[1] http://us.cloudstackcollab.org/​
>
>
>
>
>
>
>
>
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>
>
>
>


Re: QOS STORAGE NFS XENSERVER

2017-03-08 Thread Rubens Malheiro
Hello!
I use CloudStack 9.2 and enabled the QoS (per-disk transfer rate) in the
offerings, but the VMs still continue to write at the maximum rate.
Thank you

2017-03-08 1:01 GMT-03:00 Rafael Weingärtner <rafaelweingart...@gmail.com>:

> I would like to start by asking, what is the ACS version?
> Also, what do you mean by enabling QoS? Are you talking about the
> configuration of IOPS in the service offering?
>
>
> On Tue, Mar 7, 2017 at 8:49 PM, Rubens Malheiro <rubens.malhe...@gmail.com
> >
> wrote:
>
> > Hello everyone. I'm using Google Translate.
> > I'm sorry.
> > But I'd like some help with traffic control for the storages.
> > I now have a zone configured with XenServer, using FreeNAS as storage,
> > but I have a problem with disk transfer rates, since one VM can consume
> > all the traffic of another.
> > Even when I enable QoS in the service offering it does not limit
> > anything. I already tested with iSCSI and NFS but I do not get results;
> > if someone understands this, please help me.
> > And another detail: the metrics with XenServer do not show IOPS, only on
> > KVM hosts.
> >
> > Thank you all
> >
>
>
>
> --
> Rafael Weingärtner
>
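
One way to see whether a rate limit is actually being enforced on a given
hypervisor: on KVM, the disk offering's IOPS/bytes rates end up as libvirt
block I/O tuning, which you can inspect or set by hand (the domain and target
device below are examples):

    # show current throttling for the VM's first disk
    virsh blkdeviotune i-2-15-VM vda
    # throttle to ~50 MB/s and 500 IOPS to verify enforcement in the guest
    virsh blkdeviotune i-2-15-VM vda --total-bytes-sec 52428800 --total-iops-sec 500

I'm less sure how XenServer applies the equivalent limits, which may be why
the offering's QoS appears to have no effect there.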


QOS STORAGE NFS XENSERVER

2017-03-07 Thread Rubens Malheiro
Hello everyone. I'm using Google Translate.
I'm sorry.
But I'd like some help with traffic control for the storages.
I now have a zone configured with XenServer, using FreeNAS as storage, but I
have a problem with disk transfer rates, since one VM can consume all the
traffic of another.
Even when I enable QoS in the service offering it does not limit anything. I
already tested with iSCSI and NFS but I do not get results; if someone
understands this, please help me.
And another detail: the metrics with XenServer do not show IOPS, only on KVM
hosts.

Thank you all


Glusterfs and Cloudstack

2016-07-17 Thread Rubens Malheiro
Hello everyone.
I'm testing GlusterFS in CloudStack.
However, if you use the wizard to add the storage it does not work; I realized
that the wizard tries to add it as iSCSI even with GlusterFS selected.
Adding the storage manually was successful.
However, the system VMs do not start; I checked and the storage VM uses
GlusterFS 3.6, while I am using 3.8.
I wonder if someone here uses GlusterFS?
Is it possible to update the systemvm package to GlusterFS 3.8?
I'm using CloudStack 4.8.0.1.
Thank you.

Sorry for my English; I'm using Google Translate.

Doc.Holliday
101% CloudStack
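
A quick sanity check before pointing CloudStack at the volume: confirm it is
healthy and mountable from a host, and compare client and server versions
(the host and volume names below are placeholders):

    gluster --remote-host=gfs1.example.com volume info primary
    mount -t glusterfs gfs1.example.com:/primary /mnt/gluster-test
    glusterfs --version   # client version; compare with the server's

The version check matters because, as noted above, the system VM template
ships its own (older) GlusterFS client.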


Re: LTS release or not

2016-01-09 Thread Rubens Malheiro
+1
On 9 Jan 2016 at 8:55 PM, "Rene Moser"  wrote:

> Hi
>
> I recently started a discussion about the current release process. You
> may have noticed that CloudStack had a few releases in the last 2 months.
>
> My concerns were that many CloudStack users will be confused about these
> many releases (which one to take? Are fixes backported? How long will it
> receive fixes? Do I have to upgrade?).
>
> Which leads me to the question: does CloudStack need an LTS version? To me
> it would make sense in many ways:
>
> * Users in restrictive cloud environments can choose LTS for getting
> backwards compatible bug fixes only.
>
> * Users in agile cloud environments can choose latest stable and getting
> new features fast.
>
> * CloudStack developers must only maintain the latest stable (mainline)
> and the LTS version.
>
> * CloudStack developers and mainline users can accept, that mainline may
> break environments but will receive fast forward fixes.
>
> To me this would make a lot of sense. I am actually thinking about
> maintaining 4.5 as a LTS by myself.
>
> Any thoughts? +1/-1?
>
> Regards
> René
>


Re: SystemVMs 4.7

2015-12-30 Thread Rubens Malheiro
Thanks, Daan.



Doc. Holliday
101% CloudStack 



> On 30 Dec 2015, at 11:28, Daan Hoogland <daan.hoogl...@gmail.com> 
> wrote:
> 
> you can use the 4.6 ones.
> 
> On Wed, Dec 30, 2015 at 2:18 PM, Rubens Malheiro <rubens.malhe...@gmail.com>
> wrote:
> 
>> Hello everyone!
>> Doubt :)
>> Are the system VMs for CloudStack 4.7 not available yet?
>> Thank you!
>> 
>> 
>> Doc. Holliday
>> 101% CloudStack
>> 
>> 
>> 
>> 
> 
> 
> -- 
> Daan



SystemVMs 4.7

2015-12-30 Thread Rubens Malheiro
Hello everyone!
Doubt :)
Are the system VMs for CloudStack 4.7 not available yet?
Thank you!


Doc. Holliday
101% CloudStack 





Re: Multiples Publics Networks

2015-12-07 Thread Rubens Malheiro
OK, Dag.

Thanks for the help.

Great, that already solves it for me.


Doc. Holliday
101% CloudStack 



> On 4 Dec 2015, at 11:47, Dag Sonstebo <dag.sonst...@shapeblue.com> 
> wrote:
> 
> Hi Rubens,
> 
> if you create a shared network with the default shared network offering you 
> rely on a gateway external to CloudStack and the VR, i.e. the virtual router 
> only supplies DHCP, DNS and userdata, not L3 routing. You can obviously use 
> one of the other shared network offerings or create your own to match your 
> use case.
> 
> Not sure if I follow your second question - could you elaborate?
> 
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
> 
> 
> 
> 
> 
> 
> 
> 
> On 01/12/2015, 17:23, "Rubens Malheiro" <rubens.malhe...@gmail.com> wrote:
> 
>> Thanks, Dag.
>> Got it.
>> But if I use shared networks I will have problems, since the virtual router 
>> would end up with the IP address of the firewall on the LAN that would be 
>> the gateway. If I disable the management services of the virtual router I 
>> have to manage the addressing manually.
>> My idea would be for the VPC to be able to allocate different public 
>> networks for static NAT. I see that I cannot do this today. Would the 
>> portable IPs feature fix this?
>> Thank you.
>> 
>> 
>> Doc. Holliday
>> 101% CloudStack
>> 
>>> On 1 Dec 2015, at 15:02, Dag Sonstebo <dag.sonst...@shapeblue.com> 
>>> wrote:
>>> 
>>> Hi Rubens,
>>> 
>>> You can add multiple sets of public IP address space / VLAN combinations to 
>>> the public network, but you can not run multiple public networks as such.
>>> 
>>> What you can do is to present additional shared networks to clients - these 
>>> can be public or private IP ranges. Keep in mind however that if you 
>>> present public IP ranges as shared networks that you may encounter security 
>>> issues on your VMs.
>>> 
>>> Regards,
>>> Dag Sonstebo
>>> Cloud Architect
>>> ShapeBlue
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On 01/12/2015, 16:26, "Rubens Malheiro" <rubens.malhe...@gmail.com> wrote:
>>> 
>>>> Hello everyone!
>>>> 
>>>> Is it possible to run multiple PUBLIC networks and assign them to 
>>>> different isolated networks?
>>>> 
>>>> I say this because I use CloudStack privately and have several VLANs, and 
>>>> would only expose some addresses to the VMs. Doing this using the virtual 
>>>> router is a problem because the GATEWAY IP conflicts with my physical 
>>>> gateway.
>>>> 
>>>> 
>>>> Thank you
>>>> 
>>>> Sorry for my English; I use Google Translate to write.
>>>> 
>>>> 
>>>> 101% Cloudstack

Re: Multiples Publics Networks

2015-12-01 Thread Rubens Malheiro
Thanks, Dag.
Got it.
But if I use shared networks I will have problems, since the virtual router 
would end up with the IP address of the firewall on the LAN that would be the 
gateway. If I disable the management services of the virtual router I have to 
manage the addressing manually.
My idea would be for the VPC to be able to allocate different public networks 
for static NAT. I see that I cannot do this today. Would the portable IPs 
feature fix this?
Thank you.


Doc. Holliday
101% CloudStack 

> On 1 Dec 2015, at 15:02, Dag Sonstebo <dag.sonst...@shapeblue.com> 
> wrote:
> 
> Hi Rubens,
> 
> You can add multiple sets of public IP address space / VLAN combinations to 
> the public network, but you can not run multiple public networks as such.
> 
> What you can do is to present additional shared networks to clients - these 
> can be public or private IP ranges. Keep in mind however that if you present 
> public IP ranges as shared networks that you may encounter security issues on 
> your VMs.
> 
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
> 
> 
> 
> 
> 
> 
> 
> 
> On 01/12/2015, 16:26, "Rubens Malheiro" <rubens.malhe...@gmail.com> wrote:
> 
>> Hello everyone!
>> 
>> Is it possible to run multiple PUBLIC networks and assign them to different 
>> isolated networks?
>> 
>> I say this because I use CloudStack privately and have several VLANs, and 
>> would only expose some addresses to the VMs. Doing this using the virtual 
>> router is a problem because the GATEWAY IP conflicts with my physical 
>> gateway.
>> 
>> 
>> Thank you
>> 
>> Sorry for my English; I use Google Translate to write.
>> 
>> 
>> 101% Cloudstack



Multiples Publics Networks

2015-12-01 Thread Rubens Malheiro
Hello everyone!

Is it possible to run multiple PUBLIC networks and assign them to different 
isolated networks?

I say this because I use CloudStack privately and have several VLANs, and 
would only expose some addresses to the VMs. Doing this using the virtual 
router is a problem because the GATEWAY IP conflicts with my physical gateway.


Thank you

Sorry for my English; I use Google Translate to write.


101% Cloudstack