Re: Experience on GPU Support?

2024-02-23 Thread Ivan Kudryavtsev
Another way to deal with it is to use KVM agent hooks:
https://github.com/apache/cloudstack/blob/8f6721ed4c4e1b31081a951c62ffbe5331cf16d4/agent/conf/agent.properties#L123

You can implement the logic in Groovy to modify the domain XML at VM start and
attach extra devices outside of CloudStack's management.
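
For illustration, a minimal sketch of such a hook. The script path and file
name are the defaults the agent looks for; the entry-point name, its argument
order, and the VM name and PCI address are assumptions - check the comments
around the agent.properties section linked above for the exact contract:

// /etc/cloudstack/agent/hooks/libvirt-vm-xml-transformer.groovy
// Assumed entry point: receives the libvirt domain XML as a String and
// returns the (possibly modified) XML that the agent will define.
def transform(logger, vmName, xml) {
    if (vmName == "gpu-vm-01") {              // hypothetical VM name
        // Hypothetical PCI address; take the real one from `lspci` on the host.
        def hostdev = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0xd9' slot='0x00' function='0x0'/>
      </source>
    </hostdev>"""
        xml = xml.replace("</devices>", hostdev + "\n  </devices>")
    }
    return xml
}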

On Fri, Feb 23, 2024 at 2:36 PM Jorge Luiz Correa
 wrote:

> Hi Bryan! We are using GPUs here, but in a different way - customized for our
> environment, using CloudStack's features as far as possible. In the
> documentation we can see support for some GPU models that are a little dated
> today.
>
> We are using PCI passthrough. All hosts with GPUs are configured to boot
> with IOMMU enabled and vfio-pci bound to the cards, without loading the
> vendor kernel modules for each GPU.
>
> Then, we create a service offering to describe the VMs that will have a GPU.
> In this service offering we use the serviceofferingdetails[1].value field to
> insert a block of configuration related to the GPU - something like a libvirt
> "<hostdev> ... <address type='pci' .../> ... </hostdev>" block that describes
> the PCI bus of each GPU. Then, we use tags to force this compute offering to
> run only on hosts with GPUs.
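>
> (Illustrative only - the kernel parameters and the PCI address below are
> placeholders, and the exact serviceofferingdetails key/format is
> environment-specific:
>
>   # host kernel command line: enable the IOMMU and reserve the GPU for vfio-pci
>   intel_iommu=on iommu=pt vfio-pci.ids=<vendor:device>
>
>   <!-- device block injected through the service offering details -->
>   <hostdev mode='subsystem' type='pci' managed='yes'>
>     <source>
>       <address domain='0x0000' bus='0x3b' slot='0x00' function='0x0'/>
>     </source>
>   </hostdev>
> )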
>
> We create a CloudStack cluster with a lot of hosts equipped with GPUs. When
> a user needs a VM with a GPU, he/she should use the created compute offering.
> The VM will be instantiated on some host of the cluster and the GPUs are
> passed through to the VM.
>
> There is no control executed by CloudStack. For example, it can try to
> instantiate a VM on a host whose GPU is already in use (which will fail).
> Our approach is that the ROOT admin always controls that creation. We
> launch VMs using all GPUs in the infrastructure, and then use a queue
> manager to run jobs on those GPU VMs. When a user needs a dedicated VM to
> develop something, we can shut down a VM that is already running (as a
> processing node of the queue manager) and then create the dedicated VM,
> which uses the GPUs in isolation.
>
> There are other possibilities when using GPUs. For example, some models
> support virtualization, where a GPU can be divided. In that case, CloudStack
> would need to support that: it would manage the driver, creating the
> virtual GPUs based on information provided by the user, such as memory size.
> Then, it would drive the hypervisor to pass the virtual GPU through to the VM.
>
> Another possibility that would help us in our scenario is some control over
> PCI buses in hosts. For example, it would be great if CloudStack could check
> whether a PCI device is already in use on a host and then use this
> information in VM scheduling - CloudStack could launch VMs on a host that
> has a free PCI address. This would be useful not only for GPUs, but for any
> PCI device.
>
> I hope this can help in some way, to think of new scenarios etc.
>
> Thank you!
>
> On Thu, Feb 22, 2024 at 07:56, Bryan Tiang <
> bryantian...@hotmail.com>
> wrote:
>
> > Hi Guys,
> >
> > Is anyone running CloudStack with GPU support in production? Say NVIDIA H100
> > or AMD MI300X?
> >
> > I just want to know whether support for this is still ongoing, or whether
> > anyone is running a cloud business with GPUs.
> >
> > Regards,
> > Bryan
> >
>
>


Re: Proxmox and cloudstack

2023-11-21 Thread Ivan Kudryavtsev
However, it could be a great option to have CloudStack as an orchestration
mechanism for Proxmox, working on top of Proxmox tooling without the highly
obscure and hard-to-troubleshoot CloudStack router, SSVM and CPVM. Just dreaming :)

On Tue, Nov 21, 2023 at 16:59, Ivan Kudryavtsev wrote:

> CloudStack has custom compute offerings, which allow changing cores, RAM,
> etc. But, in general, Proxmox can be more convenient for small deployments
> when a single server is enough.
>
> On Tue, Nov 21, 2023 at 16:56, Hean Seng wrote:
>
>> In Proxmox everything is good and convenient. The only issue is that it has
>> no IP management, so you have to manage your own IP allocation manually.
>>
>> CloudStack has one very awkward issue: changing the compute offering /
>> scaling a VM. For example, if you want to upgrade the RAM from, say, 3 GB to
>> 4 GB, or from 1 core to 2 cores, there is no fast way to do it - you have to
>> create a new compute offering and then apply it to the VM. If you need to
>> change these values frequently, I think it is a nightmare.
>>
>> CloudStack is good for mass-creating VMs with the same spec. Proxmox makes
>> you configure them one by one.
>>
>>
>>
>>
>>
>>
>> On Tue, Nov 21, 2023 at 8:40 PM Ivan Kudryavtsev  wrote:
>>
>> > Hi, no problem at all.
>> >
>> > On Tue, Nov 21, 2023 at 16:30, Gary Dixon wrote:
>> >
>> > > I believe Windows-based VMs in Proxmox have an issue booting up
>> > > properly on KVM hosts. We are also seeing this in CloudStack.
>> > >
>> > >
>> > > Gary Dixon​
>> > > Senior Technical Consultant
>> > > 0161 537 4980
>> > > +44 7989717661
>> > > gary.di...@quadris.co.uk
>> > > www.quadris.com
>> > > Innovation House, 12‑13 Bredbury Business Park
>> > > Bredbury Park Way, Bredbury, Stockport, SK6 2SN
>> > > -Original Message-
>> > > From: Francisco Arencibia Quesada 
>> > > Sent: Tuesday, November 21, 2023 12:10 PM
>> > > To: users@cloudstack.apache.org
>> > > Subject: Proxmox and cloudstack
>> > >
>> > > Morning guys,
>> > >
>> > > Has anyone tested the compatibility between Proxmox and CloudStack?
>> > > CloudStack does support KVM, and Proxmox uses KVM, but I would like to
>> > > hear some feedback.
>> > >
>> > >
>> > > Thanks as usual
>> > > Regards
>> > >
>> >
>>
>>
>> --
>> Regards,
>> Hean Seng
>>
>


Re: Proxmox and cloudstack

2023-11-21 Thread Ivan Kudryavtsev
CloudStack has custom compute offerings, which allow changing cores, RAM,
etc. But, in general, Proxmox can be more convenient for small deployments
when a single server is enough.

On Tue, Nov 21, 2023 at 16:56, Hean Seng wrote:

> In Proxmox everything is good and convenient. The only issue is that it has
> no IP management, so you have to manage your own IP allocation manually.
>
> CloudStack has one very awkward issue: changing the compute offering /
> scaling a VM. For example, if you want to upgrade the RAM from, say, 3 GB to
> 4 GB, or from 1 core to 2 cores, there is no fast way to do it - you have to
> create a new compute offering and then apply it to the VM. If you need to
> change these values frequently, I think it is a nightmare.
>
> CloudStack is good for mass-creating VMs with the same spec. Proxmox makes
> you configure them one by one.
>
>
>
>
>
>
> On Tue, Nov 21, 2023 at 8:40 PM Ivan Kudryavtsev  wrote:
>
> > Hi, no problem at all.
> >
> > On Tue, Nov 21, 2023 at 16:30, Gary Dixon wrote:
> >
> > > I believe Windows-based VMs in Proxmox have an issue booting up
> > > properly on KVM hosts. We are also seeing this in CloudStack.
> > >
> > >
> > > Gary Dixon​
> > > Senior Technical Consultant
> > > 0161 537 4980
> > > +44 7989717661
> > > gary.di...@quadris.co.uk
> > > www.quadris.com
> > > Innovation House, 12‑13 Bredbury Business Park
> > > Bredbury Park Way, Bredbury, Stockport, SK6 2SN
> > > -Original Message-
> > > From: Francisco Arencibia Quesada 
> > > Sent: Tuesday, November 21, 2023 12:10 PM
> > > To: users@cloudstack.apache.org
> > > Subject: Proxmox and cloudstack
> > >
> > > Morning guys,
> > >
> > > Has anyone tested the compatibility between Proxmox and CloudStack?
> > > CloudStack does support KVM, and Proxmox uses KVM, but I would like to
> > > hear some feedback.
> > >
> > >
> > > Thanks as usual
> > > Regards
> > >
> >
>
>
> --
> Regards,
> Hean Seng
>


Re: Proxmox and cloudstack

2023-11-21 Thread Ivan Kudryavtsev
Hi, no problem at all.

On Tue, Nov 21, 2023 at 16:30, Gary Dixon wrote:

> I believe Windows-based VMs in Proxmox have an issue booting up
> properly on KVM hosts. We are also seeing this in CloudStack.
>
>
> Gary Dixon​
> Senior Technical Consultant
> 0161 537 4980
> +44 7989717661
> gary.di...@quadris.co.uk
> www.quadris.com
> Innovation House, 12‑13 Bredbury Business Park
> Bredbury Park Way, Bredbury, Stockport, SK6 2SN
> -Original Message-
> From: Francisco Arencibia Quesada 
> Sent: Tuesday, November 21, 2023 12:10 PM
> To: users@cloudstack.apache.org
> Subject: Proxmox and cloudstack
>
> Morning guys,
>
> Has anyone tested the compatibility between Proxmox and CloudStack?
> CloudStack does support KVM, and Proxmox uses KVM, but I would like to
> hear some feedback.
>
>
> Thanks as usual
> Regards
>


Re: VM Performance after Template Upload

2022-09-07 Thread Ivan Kudryavtsev
Hi, QCOW2 volumes may be thin, sparse or fully preallocated. Maybe the
filesystem is slow, so performance degrades while the image does
copy-on-write allocations? Try making a fully preallocated QCOW2 image with:

qemu-img create -f qcow2 -o preallocation=full <image.qcow2> <size>

And check again.
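
For an image that is already uploaded and sparse, one option (paths are
placeholders) is to check its allocation and convert it into a fully
preallocated copy:

  qemu-img info /path/to/uploaded.qcow2   # compare "virtual size" vs "disk size"
  qemu-img convert -O qcow2 -o preallocation=full \
      /path/to/uploaded.qcow2 /path/to/full.qcow2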


On Wed, Sep 7, 2022 at 16:26, Mevludin Blazevic wrote:

> Hi all,
>
> some of our users have reported that after they uploaded QCOW2 templates
> to our ACS environment and started a VM from the template, the VMs ran
> very slowly. In contrast, VMs installed directly in ACS using ISOs, for
> example, are very fast. I wonder if something in the upload view was
> misconfigured by the users.
>
> We are using KVM and in the upload view we can choose Root disk
> controller, OS Type and other options like enabling HVM. Any ideas?
>
> Best Regards
>
> Mevludin
>
>


Re: Web Console noVNC Black Screen

2022-09-06 Thread Ivan Kudryavtsev
I mean the space inside the CPVM instance's filesystem, not on the hypervisor's filesystem.

On Tue, Sep 6, 2022 at 19:34, Bs Serge wrote:

> Not the issue, because I have enough disk space and RAM in the hosts and VMs;
>
> I checked using the 'df' and 'htop' commands.
>
> I can create new instances with no problem, I just can't open them in the
> web console
>
> Best regards,
>
>
> On Tue, Sep 6, 2022 at 5:04 PM Ivan Kudryavtsev  wrote:
>
> > I hit an out-of-space situation with similar symptoms.
> >
> > On Tue, Sep 6, 2022 at 19:02, Bs Serge wrote:
> >
> > > These are logs inside the console system VM
> > >
> > >
> > >
> >
> https://paste.0xfc.de/?1d875169513dda9e#gSKvHctUwr5je9DLCpUCxizgMkDoW4DK6jf4GXnP32k
> > >
> > > Best Regards,
> > >
> > >
> > > On Tue, Sep 6, 2022 at 4:54 PM Bs Serge  wrote:
> > >
> > > > Hi all,
> > > > Cloudstack: 4.15.0
> > > > Centos 8
> > > > Hypervisor: KVM
> > > >
> > > > Hi all,
> > > >
> > > > Previously the web console was working fine for a long time with no
> > > > problem, until recently it started showing a black screen as shown here
> > > > https://ibb.co/ZTn46T5 - and no changes or upgrades were performed.
> > > >
> > > > The instances and system VMs are running without a problem. The console
> > > > proxy does not have a public IP address; this is all in a private
> > > > network.
> > > >
> > > > I can SSH inside the console proxy and see that it is running services
> > > > on ports 80, 8080, 8001 and 3922,
> > > >
> > > > I can telnet to ports 80 and 8080 from other hosts with no problem, and
> > > > this is the URL returned by the web console:
> > > >
> > > >
> > >
> >
> http://consoleproxy-ip-0.0.0./resource/noVNC/vnc.html?port=8080=x
> > > >
> > > > Any thoughts or comments would be appreciated!
> > > >
> > > > Best regards,
> > > >
> > > >
> > >
> >
>


Re: Web Console noVNC Black Screen

2022-09-06 Thread Ivan Kudryavtsev
I hit an out-of-space situation with similar symptoms.

On Tue, Sep 6, 2022 at 19:02, Bs Serge wrote:

> These are logs inside the console system VM
>
>
> https://paste.0xfc.de/?1d875169513dda9e#gSKvHctUwr5je9DLCpUCxizgMkDoW4DK6jf4GXnP32k
>
> Best Regards,
>
>
> On Tue, Sep 6, 2022 at 4:54 PM Bs Serge  wrote:
>
> > Hi all,
> > Cloudstack: 4.15.0
> > Centos 8
> > Hypervisor: KVM
> >
> > Hi all,
> >
> > Previously the web console was working fine for a long time with no
> > problem, until recently it started showing a black screen as shown here
> > https://ibb.co/ZTn46T5 - and no changes or upgrades were performed.
> >
> > The instances and system VMs are running without a problem. The console
> > proxy does not have a public IP address; this is all in a private
> > network.
> >
> > I can SSH inside the console proxy and see that it is running services on
> > ports 80,8080,8001 and 3922,
> >
> > I can telnet to ports 80 and 8080 from other hosts with no problem, and
> > this is the URL returned by the web console:
> >
> >
> http://consoleproxy-ip-0.0.0./resource/noVNC/vnc.html?port=8080=x
> >
> > Any thoughts or comments would be appreciated!
> >
> > Best regards,
> >
> >
>


Re: DNS register injection

2022-05-04 Thread Ivan Kudryavtsev
Hi, I have some experience with customizing DNS in ACS, but I don't quite get
what you are trying to achieve.

On Wed, May 4, 2022 at 5:47 PM, Ricardo Pertuz wrote:

> Hi all,
>
> Is there any effort to enable adding custom DNS records to dnsmasq on
> the VR?
>
> BR,
>
> Ricardo
>


Re: Database High Availability

2022-05-03 Thread Ivan Kudryavtsev
Well,

You'd better consult real-life MySQL experts; as for me, I referred to the
great severalnines.com articles like:
https://severalnines.com/resources/database-management-tutorials/galera-cluster-mysql-tutorial
https://severalnines.com/database-blog/avoiding-deadlocks-galera-setting-haproxy-single-node-writes-and-multi-node-reads

Just take a look at the best practices there.



On Tue, May 3, 2022 at 10:18 AM Jayanth Reddy 
wrote:

> Hi,
>
>  Thanks again for the tips! Below is the current configuration, please
> suggest changes if any.
>
> ==== HAProxy ====
>
> frontend galera-fe
> mode tcp
> bind 10.231.4.112:3306
> use_backend galera-be
>
> backend galera-be
> balance source
> mode tcp
> option tcpka
> option mysql-check user haproxy
> server galera-0 10.231.4.36:3306 check
> server galera-1 10.231.4.37:3306 check
> server galera-2 10.231.4.38:3306 check
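>
> (A sketch of a variant that matches the "pin everything to one node" advice
> below - same backend and node IPs, with HAProxy's 'backup' keyword so only
> galera-0 receives traffic unless it fails; adjust to taste:
>
> backend galera-be
>     mode tcp
>     option tcpka
>     option mysql-check user haproxy
>     server galera-0 10.231.4.36:3306 check
>     server galera-1 10.231.4.37:3306 check backup
>     server galera-2 10.231.4.38:3306 check backup
> )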
>
> ==== Keepalived ====
>
> vrrp_script check_backend {
> script "killall -0 haproxy"
> weight -20
> interval 2
> rise 2
> fall 2
> }
>
> vrrp_instance DB_0 {
>   state MASTER  # BACKUP on others
>   priority 100
>   interface enp1s0
>   virtual_router_id 50
>   advert_int 1
>   unicast_peer {
> 10.231.4.87 # Relevant on others
> 10.231.4.88 # Relevant on others
>   }
>   virtual_ipaddress {
>     10.231.4.112/24
>   }
>   track_script {
>   check_backend
>   }
> }
>
> Best Regards,
> Jayanth
>
> On Tue, May 3, 2022 at 12:33 PM Ivan Kudryavtsev  wrote:
>
> > Sounds cool,
> >
> > Just ensure that in any failure case (DB, HAProxy, OS or hardware crash)
> > all the management servers are switched to the same Galera instance;
> > otherwise, this could lead to operational problems.
> > Also, backups are still mandatory; I recommend taking them from one of
> > Galera's hot-swap (standby) nodes, not from the main operational node.
> >
> > Best wishes, Ivan
> >
> > On Tue, May 3, 2022 at 9:53 AM Jayanth Reddy  >
> > wrote:
> >
> > > Hi,
> > >
> > > Thank you. I have set up a MariaDB Galera cluster with the required
> > > HAProxy configuration and MySQL health checks. Everything is working fine.
> > >
> > > On Mon, May 2, 2022 at 10:48 AM Ivan Kudryavtsev 
> wrote:
> > >
> > > > Hi, I use a MariaDB Galera cluster.
> > > >
> > > > But you have to pin all the CS management servers to the same Galera
> > > > node to make CloudStack's transactional operations work correctly.
> > > > HAProxy or a shared common IP solves that.
> > > >
> > > > On Mon, May 2, 2022 at 7:34 AM, Jayanth Reddy wrote:
> > > >
> > > > > Hello guys,
> > > > >
> > > > > How are you doing database High Availability? Any inputs on DB
> > > > > Clustering and CloudStack configuration would really help me.
> > > > >
> > > >
> > >
> >
>


Re: Database High Availability

2022-05-03 Thread Ivan Kudryavtsev
Sounds cool,

Just ensure that in any failure case (DB, HAProxy, OS or hardware crash)
all the management servers are switched to the same Galera instance;
otherwise, this could lead to operational problems.
Also, backups are still mandatory; I recommend taking them from one of
Galera's hot-swap (standby) nodes, not from the main operational node.

Best wishes, Ivan

On Tue, May 3, 2022 at 9:53 AM Jayanth Reddy 
wrote:

> Hi,
>
> Thank you. I have set up a MariaDB Galera cluster with the required HAProxy
> configuration and MySQL health checks. Everything is working fine.
>
> On Mon, May 2, 2022 at 10:48 AM Ivan Kudryavtsev  wrote:
>
> > Hi, I use a MariaDB Galera cluster.
> >
> > But you have to pin all the CS management servers to the same Galera node
> > to make CloudStack's transactional operations work correctly. HAProxy or a
> > shared common IP solves that.
> >
> > On Mon, May 2, 2022 at 7:34 AM, Jayanth Reddy wrote:
> >
> > > Hello guys,
> > >
> > > How are you doing database High Availability? Any inputs on DB
> > > Clustering and CloudStack configuration would really help me.
> > >
> >
>


Re: Database High Availability

2022-05-01 Thread Ivan Kudryavtsev
Hi, I use a MariaDB Galera cluster.

But you have to pin all the CS management servers to the same Galera node to
make CloudStack's transactional operations work correctly. HAProxy or a
shared common IP solves that.

On Mon, May 2, 2022 at 7:34 AM, Jayanth Reddy wrote:

> Hello guys,
>
> How are you doing database High Availability? Any inputs on DB
> Clustering and CloudStack configuration would really help me.
>


Re: Local Storage Question

2022-02-09 Thread Ivan Kudryavtsev
Hi, you can do that safely. Restarting the agents may be required, but
nothing that stops the service.

On Wed, Feb 9, 2022 at 2:51 PM Edward St Pierre 
wrote:

> Hi,
>
> I have an existing KVM cluster and am looking to enable local storage for
> certain workloads.
>
> Is it safe to enable this on an existing production cluster and am I
> correct in assuming that
> /var/lib/libvirt/images/ will be the path unless defined within
> agent.properties?
>
> Currently my agent.properties only has 'local.storage.uuid' defined and not
> 'local.storage.path'
>
> Thanks in advance.
> Ed
>


Re: Needed ACS billing solution

2022-01-23 Thread Ivan Kudryavtsev
Hi,

Take a look at Cyclops (https://cyclops-billing.readthedocs.io/en/latest/).
Actually, the integration with any existing billing solution can be done in
days.

On Sun, Jan 23, 2022 at 11:30 PM Saurabh Rapatwar 
wrote:

> I guess there is no open-source solution, but at a reasonable price you can
> get a paid solution for billing, automated provisioning and support with
> Stack Console ( www.stackconsole.io )
>
> On Sun, 23 Jan, 2022, 10:58 am Technology rss, <
> technologyrss.m...@gmail.com>
> wrote:
>
> > *Hi all,*
> >
> > Could you please suggest which open-source billing solution is the best
> > for ACS?
> >
> > Thanks. Regards.
> >
>


Re: CPU Cap

2022-01-20 Thread Ivan Kudryavtsev
Hi, look at the CPU steal time percentage. It works for KVM, at least. It
should go to around 50% if you push your capped core to 100%.
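
A quick way to see this from inside the capped guest (assuming the sysstat
package is installed; otherwise plain top works too):

  mpstat 1 5     # watch the %steal column while the VM is at full load
  top            # or watch the "st" value in the %Cpu(s) line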

On Thu, Jan 20, 2022 at 17:42, Дикевич Евгений Александрович <
evgeniy.dikev...@becloud.by> wrote:

> Hi all!
>
> ACS 4.16 + XCP-NG 8.2
>
> Maybe someone can explain to me how CPU cap works?
> I created a compute offering with 1000 MHz per core and enabled the "CPU
> cap" flag. All CPUs in my clusters run at more than 2000 MHz.
>
> When I create an instance from this compute offering, I see that the VM
> uses the full CPU "speed".
> How can I enable the CPU cap correctly - and is it possible at all?
>


Re: ACS with local disks

2021-12-21 Thread Ivan Kudryavtsev
Local storage gives the simplest design and the most predictable behavior.
Live migration is often overrated, while host crashes are pretty rare.
We have servers with 500+ days of operation running CloudStack. So... it's
just fine, at least with KVM.

On Tue, Dec 21, 2021 at 17:37, Gabriel Bräscher wrote:

> Hi Yordan,
>
> HA is definitely the biggest con, as Rohit mentioned.
> Adding to that, live migrating VMs around the cluster takes a LOT more time
> as well. For example, it takes "5 minutes" when live migrating VMs on
> shared storage; however, it can take hours when live migrating from local
> storage, depending on the VM's root disk size.
> It is important to consider the time needed for each VM migration, in case
> you need to offload a host for maintenance or balance the VMs workload
> across the cluster.
>
> Regards,
> Gabriel.
>
> On Tue, Dec 21, 2021 at 11:12 AM Rohit Yadav 
> wrote:
>
> > Hi Yordan,
> >
> > The biggest cons of using local storage is probably that you'll lose
> > high-availability, if the host goes down so does the storage.
> >
> >
> > Regards.
> >
> > 
> > From: Yordan Kostov 
> > Sent: Friday, December 17, 2021 16:17
> > To: users@cloudstack.apache.org 
> > Subject: ACS with local disks
> >
> > Hey everyone,
> >
> > I am exploring a design based on ACS + XCP-NG with nodes
> > that have local disks. Roughly around 50 nodes.
> > In this case local storage is just local - no SDS
> > solutions whatsoever.
> > Are there any cons that I should have in mind?
> >
> > Best regards,
> > Jordan
> >
> >
> >
> >
> >
>


Re: How to integrate HDFS to CloudStack

2021-10-20 Thread Ivan Kudryavtsev
Looks like a joke. HDFS is not an FS you WANT to, or CAN, use for VM
filesystems.
Its architecture is completely different from what a POSIX-compliant OS
needs to keep QCOW2 (or even RAW) images.

If you want a fault-tolerant FS, use Ceph or Gluster or even NFS over DRBD.
NFS is a stateless protocol, so failover is just fine with VRRP or another
shared-IP option (though writeback settings on the storage host can have an
influence, of course); Ceph or Gluster is the way to go.


On Wed, Oct 20, 2021 at 10:53 PM  wrote:

> IS HDFS still a thing? :)
> I thought those things were horrible and slow.
>
> Anyway, you can in theory use it via a "shared mount point" type of
> primary storage. I hope you are on KVM or XCP-ng.
>
> Why not look at more modern stuff such as CEPH or at least Glusterfs?
>
> Regards
>
> On 2021-10-20 16:24, Ivson Borges wrote:
> > How to use HDFS as Primary storage?
> >
> > NFS has not fault-tolerant feature. I want to use HDFS for Primary
> > Storage in CS to garantee fault-tolerant. Its possible?
>


Re: AMD graphics PCI passthrough possible?

2021-10-01 Thread Ivan Kudryavtsev
Take a look at this MR:
https://github.com/apache/cloudstack/pull/3839/files

On Sat, Oct 2, 2021 at 10:09, Ivan Kudryavtsev wrote:

> Hi, it can be done with CloudStack agent hooks implemented in Groovy, but
> it takes some coding and design.
>
> On Fri, Oct 1, 2021 at 22:04, James Steele wrote:
>
>> Hi all,
>>
>> we have added some Ubuntu 20.04 hosts which have an AMD ATI Radeon Pro WX
>> 5100 fitted inside.
>> We would like to passthrough the Radeon PCI device to KVM guests.
>>
>> IOMMU has been setup correctly and the Radeon card is showing as a
>> VFIO-PCI device. lspci -k shows:
>>
>>
>> d9:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI]
>> Ellesmere [Radeon Pro WX 5100]
>> Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere
>> [Radeon Pro WX 5100]
>> Kernel driver in use: vfio-pci
>> Kernel modules: amdgpu
>> d9:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere
>> HDMI Audio [Radeon RX 470/480 / 570/580/590]
>> Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI
>> Audio [Radeon RX 470/480 / 570/580/590]
>> Kernel driver in use: vfio-pci
>> Kernel modules: snd_hda_intel
>>
>>
>> CloudStack has native NVIDIA support, where Compute Offerings can have
>> NVIDIA devices specified.
>> Is support for AMD coming? Has anyone managed to get AMD passthrough
>> working?
>>
>> Does anything need to be specified perhaps in:
>> /etc/cloudstack/agent/agent.properties ???
>>
>> Thanks, Jim
>>
>


Re: AMD graphics PCI passthrough possible?

2021-10-01 Thread Ivan Kudryavtsev
Hi, it can be done with CloudStack agent hooks implemented in Groovy, but
it takes some coding and design.

On Fri, Oct 1, 2021 at 22:04, James Steele wrote:

> Hi all,
>
> we have added some Ubuntu 20.04 hosts which have an AMD ATI Radeon Pro WX
> 5100 fitted inside.
> We would like to passthrough the Radeon PCI device to KVM guests.
>
> IOMMU has been setup correctly and the Radeon card is showing as a
> VFIO-PCI device. lspci -k shows:
>
>
> d9:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI]
> Ellesmere [Radeon Pro WX 5100]
> Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere
> [Radeon Pro WX 5100]
> Kernel driver in use: vfio-pci
> Kernel modules: amdgpu
> d9:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere
> HDMI Audio [Radeon RX 470/480 / 570/580/590]
> Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI
> Audio [Radeon RX 470/480 / 570/580/590]
> Kernel driver in use: vfio-pci
> Kernel modules: snd_hda_intel
>
>
> CloudStack has native NVIDIA support, where Compute Offerings can have
> NVIDIA devices specified.
> Is support for AMD coming? Has anyone managed to get AMD passthrough
> working?
>
> Does anything need to be specified perhaps in:
> /etc/cloudstack/agent/agent.properties ???
>
> Thanks, Jim
>


Re: Recommendation for Storage.

2021-09-23 Thread Ivan Kudryavtsev
NFS over Ceph is probably a bad way to go. Ceph works natively with ACS; it
is better to put effort into finding out why you have issues with that.

Also, with Ceph, short-queue-depth small-object write IO performance will be
low without proper fine-tuning, writeback caching and a pretty high number of
OSDs... that's why I use simple local storage everywhere.

On Thu, Sep 23, 2021 at 17:18, Mevludin Blazevic wrote:

> Hi all,
>
> very interesting discussion here. I am facing an issue connecting my
> Ceph cluster to CloudStack via the RBD protocol. It seems like there
> is either a documentation or software bug, because we always run
> into the same error (rbd pool not found). I was thinking about creating an
> NFS service on my Ceph cluster to connect it to Cloudstack because I
> know that adding an NFS server as primary storage works. My cluster is
> far smaller than yours but I am worry about performance and IOPS when
> using NFS service with Ceph.
>
> Mevludin
>
> On 23.09.2021 at 12:12, Alex Mattioli wrote:
> > I second what he said. I've run ACS zones with 60+ hypervisors and 2,000
> > VMs from one single pair of storage servers, all on NFS and no
> > issues at all.
> >
> > Just be sure to select the right vendor and size it correctly.
> >
> >
> >
> >
> > -Original Message-
> > From: Ivan Kudryavtsev 
> > Sent: 23 September 2021 11:02
> > To: users 
> > Subject: Re: Recommendation for Storage.
> >
> > Abishek,
> >
> > NFS over a bunch of drives works just fine but has no means for failover
> > (out of the box, when self-built). If your benchmark shows enough IO
> > performance per VM, then NFS is just the way to go.
> > Keep in mind that NFS can have various backing store technologies like
> > NetApp appliances, Ceph, plain RAID volumes - these lead to different
> > performance levels and reliability guarantees. As an access protocol, NFS is OK.
> >
> > On Thu, Sep 23, 2021 at 3:56 PM Abishek  wrote:
> >
> >> Hello Every One,
> >>
> >> We are planning to go into cloud production with CloudStack 4.15 and
> >> KVM hosts. We are currently considering NFS as storage because of the
> >> performance. Is it feasible to use NFS as the primary storage type in a
> >> production environment? Will there be any bottlenecks or other drawbacks
> >> in the future (if anyone has deployed NFS as storage in production)?
> >> Should I prefer iSCSI to NFS, or any other storage type over NFS, for a
> >> production environment?
> >>
> >> Thank You.
> >>
> --
> Mevludin Blazevic
>
> University of Koblenz-Landau
> Computing Centre (GHRKO)
> Universitaetsstrasse 1
> D-56070 Koblenz, Germany
> Room A023
>
>


Re: Recommendation for Storage.

2021-09-23 Thread Ivan Kudryavtsev
Abishek,

NFS over a bunch of drives works just fine but has no means for failover
(out of the box, when self-built). If your benchmark shows enough IO
performance per VM, then NFS is just the way to go.
Keep in mind that NFS can have various backing store technologies like
NetApp appliances, Ceph, plain RAID volumes - these lead to different
performance levels and reliability guarantees. As an access protocol, NFS is OK.

On Thu, Sep 23, 2021 at 3:56 PM Abishek  wrote:

> Hello Every One,
>
> We are planning to go into cloud production with CloudStack 4.15 and KVM
> hosts. We are currently considering NFS as storage because of the
> performance. Is it feasible to use NFS as the primary storage type in a
> production environment? Will there be any bottlenecks or other drawbacks in
> the future (if anyone has deployed NFS as storage in production)? Should I
> prefer iSCSI to NFS, or any other storage type over NFS, for a production
> environment?
>
> Thank You.
>


Re: Groovy Script error while changing guest cpu model.

2021-09-13 Thread Ivan Kudryavtsev
guest.cpu.mode=host-passthrough

should work like a charm. If it doesn't work, it means there is a regression
in the CloudStack code that prevents the property from being applied.
Make sure the file uses UNIX line endings.
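
For reference, a minimal sketch of the relevant lines (the 'EPYC' model name is
only an example; pick one from `virsh cpu-models x86_64` on your host):

  # /etc/cloudstack/agent/agent.properties
  guest.cpu.mode=host-passthrough
  # or, if you prefer a named model for easier live migration across hosts:
  # guest.cpu.mode=custom
  # guest.cpu.model=EPYC

  # verify after starting a VM on the host:
  virsh dumpxml <vm-instance-name> | grep -A3 '<cpu'
  # check the file really has UNIX line endings:
  file /etc/cloudstack/agent/agent.properties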

On Mon, Sep 13, 2021 at 7:00 PM Wei ZHOU  wrote:

> Hi Abishek,
>
> It is better to share your agent.properties
>
> -Wei
>
> On Mon, 13 Sept 2021 at 13:39, avi  wrote:
>
> > Hello Ivan,
> >
> > But I am still getting the QEMU virtual CPU in the guest VMs (Windows). I
> > did everything as documented. I want the VMs to have the same CPU as the
> > host machines. Will that be possible?
> >
> > Thank You.
> >
> > On 2021/09/13 07:55:39, Ivan Kudryavtsev  wrote:
> > > That is just fine. Go ahead, it's not an error.
> > >
> > > On Mon, Sep 13, 2021 at 2:24 PM avi  wrote:
> > >
> > > > Hello All,
> > > >
> > > > I am using CloudStack 4.15.1 with KVM hosts. I was playing with
> > > > changing the guest CPU model and tested out host-passthrough and
> > > > host-model, but I was unable to succeed. I changed the parameter in
> > > > the agent config file as documented but I received the following
> > > > error on both hosts:
> > > >  12:07:43,928 INFO  [kvm.storage.LibvirtStorageAdaptor]
> > > > (agentRequest-Handler-5:null) (logid:a59f3a95) Trying to fetch
> storage
> > pool
> > > > 0420bb0c-6e77-3a53-994a-8907905cd465 from libvirt
> > > >  12:07:44,092 INFO  [kvm.storage.LibvirtStorageAdaptor]
> > > > (agentRequest-Handler-5:null) (logid:a59f3a95) Trying to fetch
> storage
> > pool
> > > > 0420bb0c-6e77-3a53-994a-8907905cd465 from libvirt
> > > >  12:07:44,309 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > > (agentRequest-Handler-5:null) (logid:a59f3a95) Groovy script
> > > > '/etc/cloudstack/agent/hooks/libvirt-vm-xml-transformer.groovy' is
> not
> > > > available. Transformations will not be applied.
> > > >  12:07:44,309 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > > (agentRequest-Handler-5:null) (logid:a59f3a95) Groovy scripting
> engine
> > is
> > > > not initialized. Data transformation skipped.
> > > >  12:07:44,800 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > > (agentRequest-Handler-5:null) (logid:a59f3a95) Groovy script
> > > > '/etc/cloudstack/agent/hooks/libvirt-vm-state-change.groovy' is not
> > > > available. Transformations will not be applied.
> > > >  12:07:44,801 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > > (agentRequest-Handler-5:null) (logid:a59f3a95) Groovy scripting
> engine
> > is
> > > > not initialized. Data transformation skipped.
> > > >
> > > > I only changed the guest.cpu.mode option in the agent file of both
> > > > hosts and restarted the agent and libvirtd. The host OS is CentOS 7.
> > > > The machine starts successfully but the CPU is still set to the QEMU
> > > > virtual CPU. Did I miss something during the configuration? Both KVM
> > > > hosts have the same specification.
> > > > I will be grateful for any help.
> > > >
> > > > Thank You.
> > > >
> > > >
> > >
> >
>


Re: Groovy Script error while changing guest cpu model.

2021-09-13 Thread Ivan Kudryavtsev
Hi, this means that you used a wrong CPU specification in the config. Those
hook warnings have nothing to do with it, especially because the hook scripts
are not present.

On Mon, Sep 13, 2021 at 18:40, avi wrote:

> Hello Ivan,
>
> But I am still getting the QEMU virtual CPU in the guest VMs (Windows). I
> did everything as documented. I want the VMs to have the same CPU as the
> host machines. Will that be possible?
>
> Thank You.
>
> On 2021/09/13 07:55:39, Ivan Kudryavtsev  wrote:
> > That is just fine. Go ahead, it's not an error.
> >
> > On Mon, Sep 13, 2021 at 2:24 PM avi  wrote:
> >
> > > Hello All,
> > >
> > > I am using CloudStack 4.15.1 with KVM hosts. I was playing with changing
> > > the guest CPU model and tested out host-passthrough and host-model, but
> > > I was unable to succeed. I changed the parameter in the agent config
> > > file as documented but I received the following error on both hosts:
> > >  12:07:43,928 INFO  [kvm.storage.LibvirtStorageAdaptor]
> > > (agentRequest-Handler-5:null) (logid:a59f3a95) Trying to fetch storage
> pool
> > > 0420bb0c-6e77-3a53-994a-8907905cd465 from libvirt
> > >  12:07:44,092 INFO  [kvm.storage.LibvirtStorageAdaptor]
> > > (agentRequest-Handler-5:null) (logid:a59f3a95) Trying to fetch storage
> pool
> > > 0420bb0c-6e77-3a53-994a-8907905cd465 from libvirt
> > >  12:07:44,309 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > (agentRequest-Handler-5:null) (logid:a59f3a95) Groovy script
> > > '/etc/cloudstack/agent/hooks/libvirt-vm-xml-transformer.groovy' is not
> > > available. Transformations will not be applied.
> > >  12:07:44,309 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > (agentRequest-Handler-5:null) (logid:a59f3a95) Groovy scripting engine
> is
> > > not initialized. Data transformation skipped.
> > >  12:07:44,800 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > (agentRequest-Handler-5:null) (logid:a59f3a95) Groovy script
> > > '/etc/cloudstack/agent/hooks/libvirt-vm-state-change.groovy' is not
> > > available. Transformations will not be applied.
> > >  12:07:44,801 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > (agentRequest-Handler-5:null) (logid:a59f3a95) Groovy scripting engine
> is
> > > not initialized. Data transformation skipped.
> > >
> > > I only changed the guest.cpu.mode option in the agent file of both
> > > hosts and restarted the agent and libvirtd. The host OS is CentOS 7.
> > > The machine starts successfully but the CPU is still set to the QEMU
> > > virtual CPU. Did I miss something during the configuration? Both KVM
> > > hosts have the same specification.
> > > I will be grateful for any help.
> > >
> > > Thank You.
> > >
> > >
> >
>


Re: Groovy Script error while changing guest cpu model.

2021-09-13 Thread Ivan Kudryavtsev
That is just fine. Go ahead, it's not an error.

On Mon, Sep 13, 2021 at 2:24 PM avi  wrote:

> Hello All,
>
> I am using CloudStack 4.15.1 with KVM hosts. I was playing with changing
> the guest CPU model and tested out
> host-passthrough and host-model, but I was unable to succeed. I changed the
> parameter in the agent config file as documented but I received the following
> error on both hosts:
>  12:07:43,928 INFO  [kvm.storage.LibvirtStorageAdaptor]
> (agentRequest-Handler-5:null) (logid:a59f3a95) Trying to fetch storage pool
> 0420bb0c-6e77-3a53-994a-8907905cd465 from libvirt
>  12:07:44,092 INFO  [kvm.storage.LibvirtStorageAdaptor]
> (agentRequest-Handler-5:null) (logid:a59f3a95) Trying to fetch storage pool
> 0420bb0c-6e77-3a53-994a-8907905cd465 from libvirt
>  12:07:44,309 WARN  [kvm.resource.LibvirtKvmAgentHook]
> (agentRequest-Handler-5:null) (logid:a59f3a95) Groovy script
> '/etc/cloudstack/agent/hooks/libvirt-vm-xml-transformer.groovy' is not
> available. Transformations will not be applied.
>  12:07:44,309 WARN  [kvm.resource.LibvirtKvmAgentHook]
> (agentRequest-Handler-5:null) (logid:a59f3a95) Groovy scripting engine is
> not initialized. Data transformation skipped.
>  12:07:44,800 WARN  [kvm.resource.LibvirtKvmAgentHook]
> (agentRequest-Handler-5:null) (logid:a59f3a95) Groovy script
> '/etc/cloudstack/agent/hooks/libvirt-vm-state-change.groovy' is not
> available. Transformations will not be applied.
>  12:07:44,801 WARN  [kvm.resource.LibvirtKvmAgentHook]
> (agentRequest-Handler-5:null) (logid:a59f3a95) Groovy scripting engine is
> not initialized. Data transformation skipped.
>
> I only changed the guest.cpu.mode option in the agent file of both hosts
> and restarted the agent and libvirtd. The host OS is CentOS 7. The machine
> starts successfully but the CPU is still set to the QEMU virtual CPU. Did I
> miss something during the configuration? Both KVM hosts have the same
> specification.
> I will be grateful for any help.
>
> Thank You.
>
>


Re: GlusterFS consideration KVM.

2021-09-03 Thread Ivan Kudryavtsev
GlusterFS works fine as a shared mount point. No NFS or other extra layers
required. Just mount the volume on every host and you are good to go.
Performance is acceptable (at least on a bunch of SSDs), but not comparable
with local storage, of course. It is not recommended for IO-intensive VMs. We
recommend it for VMs that need HA, like routers, and for RAM/CPU-intensive
workloads.
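
A minimal sketch of the "mount it everywhere" part (the volume name, server
name and mount path are placeholders; the GlusterFS FUSE client is assumed to
be installed on every KVM host):

  # run on every host, then add the path as SharedMountPoint primary storage
  mkdir -p /mnt/acs-gluster
  mount -t glusterfs gluster1:/acs-primary /mnt/acs-gluster

  # /etc/fstab entry so it survives reboots
  gluster1:/acs-primary  /mnt/acs-gluster  glusterfs  defaults,_netdev  0 0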

Cheers

On Sat, Sep 4, 2021 at 05:01, Mauro Ferraro - G2K Hosting <
mferr...@g2khosting.com>:

> Hi Abishek,
>
> We are testing CS 4.15.1 with Gluster at the moment, with a
> distributed-replicated configuration, NFS-Ganesha as the storage service
> protocol and ZFS 2.1 (raidz-1). We are trying different
> configurations and we cannot get really good performance.
>
> If somebody in this group can contribute with information we'll
> appreciate your help too.
>
> On 2/9/2021 at 01:03, Abishek wrote:
> > Hello All,
> >
> > I have been testing CloudStack 4.15.1 for the past few weeks and it is
> > going well. For further testing in our environment I am planning to test
> > out GlusterFS with KVM hosts (the servers have only local storage). Will
> > GlusterFS have any performance downside? Has anyone previously had a
> > setup with Gluster (replicated)? Anything to consider while deploying?
> >
> > I will be very grateful for any kind of recommendation.
> > Thank you.
> >
>


Re: Unable to start a VM due to insufficient capacity

2021-09-02 Thread Ivan Kudryavtsev
Hi, the actual error is earlier (above) in the mentioned log part. Please
provide about 100 lines before it.

On Thu, Sep 2, 2021 at 2:55 PM technologyrss.mail <
technologyrss.m...@gmail.com> wrote:

> *Hi,*
>
> I set up an advanced zone using *ACS v4.15.1* but can't create instances
> properly. I attached the ACS log file along with my system dashboard, as in
> the images below.
>
> This is a small environment for testing purposes. All servers are CentOS 7.8
> with separate NFS storage.
>
> *This is the ACS log.*
>
> *This is the web browser error when I create an instance.*
>
> *This is my server system capacity.*
>
> Please give me some idea of what the issue with my system capacity is.
>
>
> *---*
> *alamin*
>
>
>


Re: How to add additional local storage pools on KVM

2021-08-25 Thread Ivan Kudryavtsev
Indeed, shared-mount-point storage works best here, but it is still ugly due
to its workaround nature :)

On Wed, Aug 25, 2021 at 18:37, Michael Brußk wrote:

> Hi Ivan,
>
> well, we have already aligned internally that we will implement handling
> for additional local storage pools (not now, but in the near future).
> We have also already thought about a very similar workaround to the one you
> described, but with SharedMountPoint (instead of NFS) in
> single-node clusters ... thus the storage pool would be used directly as a
> local filesystem/folder by the host ^^
>
> regards,
> Michael
>
> On 25 August 2021 at 11:05, "Ivan Kudryavtsev"
> wrote:
>
> Hi, you cannot have multiple local storages for a single host (however it
> would be great if it is supported, but not yet).
> There is one real-life workaround basically:
> 1. you add a single host to a single cluster (e.g. C1)
> 2. you export storage pools from the host as NFS filesystems with scope
> "Cluster" to the same cluster (e.g. C1)
> 3. tag them appropriately like SSD, HDD, etc.
> it works, but it's ugly and it has overhead which is introduced by NFS.
> Multiple local storage pools would be great to have...
> On Wed, Aug 25, 2021 at 3:43 PM Michael Brußk  wrote:
>
> Hi Davide,
>
> thanks for the reply.
> Is there some documentation available about this setting?
> When should this be set - before or after the host has been added to CS?
> If before, how to make sure this will not be overwritten by the agent
> setup process?
>
> > some of the hosts could mount the ssd and the rest the hdd and assign
> > tags to the host and the service offerings.
> This is not possible, since each host has one local SSD RAID and one HDD
> RAID, thus they can't be mounted from/to other hosts (maybe via NFS, but
> this would result in a weird design).
> Regards,
> Michael
> -Original Message-
> From: David Jumani
> Sent: Wednesday, 25 August 2021 09:01
> To: users@cloudstack.apache.org
> Subject: Re: How to add additional local storage pools on KVM
>
> Hi,
>
> You can specify the path to a local directory on the host (which can be
> mounted) in the agent.properties file
>
> local.storage.path=/mnt/path
>
> As for multiple storage locations on a single host, I'm not sure whether
> it is supported. As a workaround, some of the hosts could mount the SSD and
> the rest the HDD, and you could assign tags to the hosts and the service
> offerings.
>
> CloudStack will match those tags and bring up the VM on the appropriate
> host 
>
> From: Michael Brußk <m...@mib85.de>
>
> Sent: Tuesday, August 24, 2021 7:55 PM
>
> To: users@cloudstack.apache.org
>
> Subject: How to add additional local storage pools on KVM
>
> Hi,
>
> how is it possible to add additional local storage pools to KVM hosts?
>
> As per documentation (and observations) when adding a new KVM host to a
> zone, where "use local storage for client vms" is enabled, CS automatically
> creates a new filesystem based local storage pool under
> /var/lib/libvirt/images.
>
> We would like to use two additional local storages for different purposes
> (ssd-raid for realtime/hot data and hdd-raid for archive data) on each host
> (cluster with upto 8 hosts).
>
> We are aware of that live migrations are not functional when using local
> storage, but we want to use already existing CloudStack installation (and
> all our processes around) for this special project.
>
> Regards,
>
> Michael
>
>
>
>


Re: How to add additional local storage pools on KVM

2021-08-25 Thread Ivan Kudryavtsev
Hi, you cannot have multiple local storage pools for a single host (it
would be great if that were supported, but it is not yet).

There is one real-life workaround basically:

1. you add a single host to a single cluster (e.g. C1)
2. you export storage pools from the host as NFS filesystems with scope
"Cluster" to the same cluster (e.g. C1)
3. tag them appropriately like SSD, HDD, etc.

it works, but it's ugly and it has overhead which is introduced by NFS.
Multiple local storage pools would be great to have...
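
A rough sketch of step 2 for one host (host names, IPs, paths and IDs are
placeholders; the CloudMonkey call maps to the createStoragePool API):

  # /etc/exports on the host itself
  /data/ssd  10.0.0.0/24(rw,no_root_squash,async)

  # register it as cluster-scoped primary storage with a tag
  cmk create storagepool zoneid=<zone-id> podid=<pod-id> clusterid=<C1-id> \
      scope=cluster name=host1-ssd url=nfs://10.0.0.11/data/ssd tags=SSD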




On Wed, Aug 25, 2021 at 3:43 PM Michael Brußk  wrote:

> Hi Davide,
>
> thanks for the reply.
> Is there some documentation available about this setting?
> When should this be set - before or after the host has been added to CS?
> If before, how to make sure this will not be overwritten by the agent
> setup process?
>
> > some of the hosts could mount the ssd and the rest the hdd and assign
> > tags to the host and the service offerings.
> This is not possible, since each host has one local SSD RAID and one HDD
> RAID, thus they can't be mounted from/to other hosts (maybe via NFS, but
> this would result in a weird design).
> Regards,
> Michael
> -Original Message-
> From: David Jumani
> Sent: Wednesday, 25 August 2021 09:01
> To: users@cloudstack.apache.org
> Subject: Re: How to add additional local storage pools on KVM
>
> Hi,
>
> You can specify the path to a local directory on the host (which
> can be mounted) in the agent.properties file
>
> local.storage.path=/mnt/path
>
> As for multiple storage locations on a single host, I'm not sure
> whether it is supported. As a workaround, some of the hosts could mount the
> SSD and the rest the HDD, and you could assign tags to the hosts and the
> service offerings.
>
> CloudStack will match those tags and bring up the VM on the
> appropriate host 
>
> From: Michael Brußk <m...@mib85.de>
>
> Sent: Tuesday, August 24, 2021 7:55 PM
>
> To: users@cloudstack.apache.org
>
> Subject: How to add additional local storage pools on KVM
>
> Hi,
>
> how is it possible to add additional local storage pools to KVM
> hosts?
>
> As per the documentation (and observations), when adding a new KVM host
> to a zone where "use local storage for client vms" is enabled, CS
> automatically creates a new filesystem-based local storage pool under
> /var/lib/libvirt/images.
>
> We would like to use two additional local storage pools for different
> purposes (SSD RAID for realtime/hot data and HDD RAID for archive data) on
> each host (a cluster with up to 8 hosts).
>
> We are aware that live migration is not functional when using
> local storage, but we want to use our already existing CloudStack
> installation (and all our processes around it) for this special project.
>
> Regards,
>
> Michael
>


Re: Auto Start Instances

2021-05-30 Thread Ivan Kudryavtsev
Hi. Just set 'HA Enabled' on the service offerings used for the VMs that need
the auto-start capability.
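
As an illustration, when creating such an offering via CloudMonkey (values are
placeholders; offerha is the relevant createServiceOffering parameter):

  cmk create serviceoffering name=ha-2c-4g displaytext="2 vCPU / 4 GB, HA" \
      cpunumber=2 cpuspeed=2000 memory=4096 offerha=true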

On Sun, May 30, 2021 at 19:21, Bs Serge wrote:

> Hello, good community,
>
> Is there a way to automatically start instances when the host server is UP?
>
> The SystemVMs are UP automatically!
>
> Centos8
> Cloudstack: 4.15
> Hypervisor: KVM
>
> Kind Regards,
>


Re: Change IP address for ConsoleProxy and SecondaryStorageVM IP

2021-01-03 Thread Ivan Kudryavtsev
Well, it could be like that; the only other way is to fix it through the DB,
but that's not a supported way.

On Sun, Jan 3, 2021 at 15:55, Hean Seng wrote:

> I tried destroying it, but when it was rebuilt, it got the same IP.
>
> On Sun, Jan 3, 2021 at 4:46 PM Ivan Kudryavtsev  wrote:
>
> > Hi, just destroy them.
> >
> > On Sun, Jan 3, 2021 at 14:12, Hean Seng wrote:
> >
> > > Hi
> > >
> > >
> > > Is there any way to change the IP of the ConsoleProxy or Secondary Storage VM?
> > >
> > > On the system VM detail page, I did not see any place to change the IP.
> > >
> > >
> > > --
> > > Regards,
> > > Hean Seng
> > >
> >
>
>
> --
> Regards,
> Hean Seng
>


Re: Change IP address for ConsoleProxy and SecondaryStorageVM IP

2021-01-03 Thread Ivan Kudryavtsev
Hi, just destroy them.

On Sun, Jan 3, 2021 at 14:12, Hean Seng wrote:

> Hi
>
>
> Is there any way to change the IP of the ConsoleProxy or Secondary Storage VM?
>
> On the system VM detail page, I did not see any place to change the IP.
>
>
> --
> Regards,
> Hean Seng
>


Re: Brute force SSH trojan

2020-11-22 Thread Ivan Kudryavtsev
It must be configured upon the first boot or, as you have said,
preconfigured. Our templates set the password upon the first boot.

On Mon, Nov 23, 2020 at 14:20:

> Hi Ivan.
>
> I can imagine: if the template has the ability to reset the password, that
> means there should not be any password pre-assigned, right?
>
> Which piece of code is responsible for the password/key reset - is it
> cloud-init, or is there some other part involved?
>
> I will try to work out a fix and report it to the template owner.
>
> Regards,
> Rafael
>
> On Mon, 2020-11-23 12:32 AM, Ivan Kudryavtsev  wrote:
> > Hi. It looks like an improperly crafted template, not an ACS issue.
> >
> > On Mon, Nov 23, 2020 at 02:18, Rafael del Valle wrote:
> >
> > > Hi Hean,
> > >
> > > Mystery solved.
> > >
> > > The template comes with password authentication enabled in the SSH
> > > server, and the debian user has a default password: "password".
> > >
> > > Assigning the SSH key only added the key, without disabling anything
> > > else.
> > >
> > > Regards,
> > > Rafael
> > >
> > >
> > >
> > >
> > > On Sun, 2020-11-22 03:38 PM, Hean Seng <heans...@gmail.com> wrote:
> > > > Hi
> > > >
> > > > You did not change the password, and all using the default password ?
> > > >
> > > > > On Sun, Nov 22, 2020 at 4:59 PM wrote:
> > > >
> > > > > ​Hi Community!
> > > > >
> > > > > Congratulations to the new committers.
> > > > >
> > > > > One VM in a test environment was infected by a brute force SSH
> trojan.
> > > > >
> > > > > The OS is debian-9 , the template from openvm.eu
> > > > >
> > > > > It had only SSH (22) and iperf (5001) services running and
> reachable
> > > from
> > > > > anywhere.
> > > > >
> > > > > I believe this article is related because of the tar file
> > > (dota3.tar.gz)
> > > > > that I found on the system:
> > > > > ​
> > > > >
> > > > >
> > >
> https://ethicaldebuggers.com/outlaw-botnet-affects-more-than-2-linux-servers/
> > > > > ​
> > > > > I have a snapshot of the ROOT volume in case anybody is interested
> to
> > > > > review it.
> > > > >
> > > > > I suspect they got in via SSH, but I wonder how as only one KEY was
> > > setup
> > > > > (no password). I am trying to find out more information.
> > > > >
> > > > > Has anybody experienced this ?
> > > > >
> > > > > Regards,
> > > > > Rafael
> > > > >
> > > >
> > > >
> > > > --
> > > > Regards,
> > > > Hean Seng
> > > >
> >


Re: Brute force SSH trojan

2020-11-22 Thread Ivan Kudryavtsev
Hi. It looks like an improperly crafted template, not an ACS issue.

On Mon, Nov 23, 2020 at 02:18, Rafael del Valle wrote:

> Hi Hean,
>
> Mystery solved.
>
> The template comes with password authentication enabled in the SSH server,
> and the debian user has a default password: "password".
>
> Assigning the SSH key only added the key, without disabling anything else.
>
> Regards,
> Rafael
>
>
>
>
> On Sun, 2020-11-22 03:38 PM, Hean Seng  wrote:
> > Hi
> >
> > You did not change the password, and all using the default password ?
> >
> > On Sun, Nov 22, 2020 at 4:59 PM wrote:
> >
> > > ​Hi Community!
> > >
> > > Congratulations to the new committers.
> > >
> > > One VM in a test environment was infected by a brute force SSH trojan.
> > >
> > > The OS is debian-9 , the template from openvm.eu
> > >
> > > It had only SSH (22) and iperf (5001) services running and reachable
> from
> > > anywhere.
> > >
> > > I believe this article is related because of the tar file
> (dota3.tar.gz)
> > > that I found on the system:
> > > ​
> > >
> > >
> https://ethicaldebuggers.com/outlaw-botnet-affects-more-than-2-linux-servers/
> > > ​
> > > I have a snapshot of the ROOT volume in case anybody is interested to
> > > review it.
> > >
> > > I suspect they got in via SSH, but I wonder how as only one KEY was
> setup
> > > (no password). I am trying to find out more information.
> > >
> > > Has anybody experienced this ?
> > >
> > > Regards,
> > > Rafael
> > >
> >
> >
> > --
> > Regards,
> > Hean Seng
> >


Re: Does CloudStack support PXC?

2020-11-12 Thread Ivan Kudryavtsev
Hi, CloudStack heavily relies on lock/unlock functionality.
A Galera cluster is fine, but as Andrija says, a single read/write node
must be used for all CS management servers.

On Thu, Nov 12, 2020 at 20:31, Andrija Panic wrote:

> As long as you use a single node for writes and reads - yes.
> Some users have used MySQL proxy to send writes to node1 and reads to
> node1/2/3, and eventually, I believe, they had some issues (but that was not
> confirmed to be the root cause).
>
> You can also use HAProxy in front of the Percona cluster (always a single
> active node, and 2 "backup" nodes), so failover is seamless (as in the case
> of MySQL proxy), but be warned that IN THE PAST there were some stupidly
> long (not closed) MySQL sessions/connections that would cause ACS to throw
> an exception during e.g. long-lasting volume snapshots (the DB connection
> was kept open while backing up the snapshot to secondary storage, and at
> some point HAProxy would close the connection and ACS would crash) - but
> this was fixed in 4.9 or so, so it should be a safe bet these days.
>
> Make sure to test extensively.
> There is nothing to convert really - dump the DBs properly and import them into PXC.
>
> Best,
>
> On Wed, 11 Nov 2020 at 14:52, li jerry  wrote:
>
> > Hi, dear CloudStack users,
> > I see that PXC (Percona XtraDB Cluster) has excellent functionality.
> >
> > But it has the following limitations:
> > - Only the InnoDB storage engine is supported
> > - LOCK/UNLOCK TABLES is not supported
> > - The lock functions GET_LOCK() and RELEASE_LOCK() are not supported
> >
> >
> > Does cloudstack use the above functions?
> > Can I convert the MySQL used by cloudstack to PXC?
> >
> > Has anyone used PXC in cloudstack?
> >
> > -Jerry
> >
> >
>
> --
>
> Andrija Panić
>


Re: NFS Support, pNFS 4.1 , and 4.2

2020-10-14 Thread Ivan Kudryavtsev
Hi Hean,

I've never tried pNFS, but the problem is the same. If you want failover
and hyper-scaling, then use Gluster or Ceph. Why would you use pNFS, which
is used by almost nobody?
People use NFS because:

1. it's primitive
2. it's easy to manage
3. it supports migrations
4. if planned well (cluster-wide) you can limit the failure domain.
5. it is rock solid if deployed properly (I had my NFS up for 600+ days).

If you want fault tolerance, use Gluster, Ceph, or a proprietary solution;
don't reinvent the wheel.

I don't use NFS because my goal is to limit the failure domain to a single
host, so I use local storage. Every server is packed with SSD RAID or NVMe
RAID; for me, offline migrations are just fine. A bunch of my users want
fault tolerance, so they use a pretty tiny Gluster volume (1 TB).

You will never get great IOPS on any parallel, clustered FS or storage.
If you want the best performance, use NFS or local storage; if you want
failover, use Gluster (shared mount point) or Ceph (natively supported).

--
Ivan




On Wed, Oct 14, 2020 at 11:59 PM Hean Seng  wrote:

> HI
>
> Since most users use NFS for CloudStack, can I ask which version of NFS the
> CloudStack NFS mount can support?
>
> I just ran a test and it seems to be v4, and there is no way to switch it to
> 4.1, which supports pNFS, etc.
>
> Can anybody advise on this?
>
> Thanks
>
>
> --
> Regards,
> Hean Seng
>


Re: Cloudstack - What Storage you using ?

2020-10-13 Thread Ivan Kudryavtsev
Hi, the hypervisor QoS limits configured in a service offering allow limiting
IOPS and bytes/s for NFS as well as for other storage, because they are
enforced by QEMU.
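
For illustration, the hypervisor-enforced QoS knobs on a disk offering via
CloudMonkey (values are placeholders; the parameters belong to the
createDiskOffering API):

  cmk create diskoffering name=ssd-limited displaytext="SSD, 500 IOPS" \
      disksize=50 iopsreadrate=500 iopswriterate=500 \
      bytesreadrate=52428800 byteswriterate=52428800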

On Wed, Oct 14, 2020 at 01:53, Hean Seng wrote:

> Hi
>
> Does anybody know whether the NFS implementation of primary storage can
> support QoS for IOPS in service offerings?
>
> On Mon, Oct 12, 2020 at 8:20 PM Pratik Chandrakar <
> chandrakarpra...@gmail.com> wrote:
>
> > I was asking about the storage layer instead of the VM.
> >
> > On Mon, Oct 12, 2020 at 12:36 PM Hean Seng  wrote:
> >
> > > Local Disk is not possible for HA .
> > >
> > > If you can accept NFS, then HA is not an issue .
> > >
> > > On Mon, Oct 12, 2020 at 2:42 PM Pratik Chandrakar <
> > > chandrakarpra...@gmail.com> wrote:
> > >
> > > > Hi Andrija,
> > > > I have a similar requirement like Hean. So what's your recommendation
> > for
> > > > HA with NFS/Local disk?
> > > >
> > > >
> > > > On Sat, Oct 10, 2020 at 8:55 AM Hean Seng 
> wrote:
> > > >
> > > > > Hi Andrija
> > > > >
> > > > > I am planning on a high end hypervisor ,  AMD EYPCv2 7742 CPU that
> > get
> > > > > 64core and 128thread ,   384G RAM, etc , and multiple 10G card
> bnond
> > or
> > > > 40G
> > > > > card for storage network.
> > > > >
> > > > > On this kind of server, probably get up to 200 VM per hypervisor.
> >  I'm
> > > > > just afraid that NFS will create a bottleneck if the storage server
> > is
> > > > > running a  lower-end  Hardware on storage.
> > > > >
> > > > > For ISCSI, normally won't be an issue of hardware cpu in storage
> > server
> > > > and
> > > > > it act almost like external hard disk, while NFS needs to process
> the
> > > > file
> > > > > system in Storage.
> > > > >
> > > > > I had read through  many articles, and mentioned GFS2 has many
> > > issues.  I
> > > > > initially planned to run OCFS2, but it does not support REDHAT any
> > > more,
> > > > > and there is a bug on Ubuntu18 , not sure if solved.  OCFS2 should
> > be a
> > > > lot
> > > > > more stable and less issue compare GFS2
> > > > >
> > > > > this is ocfs2 on ubuntu bug, which i am facing exactly the same.
> > > > >
> https://bugs.launchpad.net/ubuntu/+source/linux-signed/+bug/1895010
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Fri, Oct 9, 2020 at 6:41 PM Andrija Panic <
> > andrija.pa...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > free advice - try to avoid Clustered File Systems always - due to
> > > > > > complexity, and sometimes due to the utter lack of reliability (I
> > > had,
> > > > > > outside of ACS, an awful experience with GFS2, set by RedHat
> > themself
> > > > > for a
> > > > > > former customer), etc - so Shared Mount point is to be skipped,
> if
> > > > > > possible.
> > > > > >
> > > > > > Local disks - there are some downsides to VM live migration - so
> > make
> > > > > sure
> > > > > > to understand the limits and options.
> > > > > > iSCSI = same LUN attached to all KVM hosts = you again need
> > Clustered
> > > > > File
> > > > > > System, and that will be, again, consumed as Shared Mount point.
> > > > > >
> > > > > > For NFS, you are on your own when it comes to the performance and
> > > > > tunning -
> > > > > > this is outside of ACS - usually no high CPU usage on a
> moderately
> > > used
> > > > > NFS
> > > > > > server.
> > > > > >
> > > > > > Best,
> > > > > >
> > > > > > On Thu, 8 Oct 2020 at 18:45, Hean Seng 
> wrote:
> > > > > >
> > > > > > > For using NFS, do you have performance issue like  Storage CPU
> > > > getting
> > > > > > very
> > > > > > > high ?   And i believe this could be cause the the filesystem
> is
> > > > make
> > > > > at
> > > > > > > Storage instead of Compute Node.
> > > > > > >
> > > > > > > Thus i am thinking of is ISCSI or LocalStorage.
> > > > > > >
> > > > > > > For ISCSI, i prefer if can running on LVM , which i believe
> > > > performance
> > > > > > > shall be the best , compared localstroage where file-based.
> > > > > > >
> > > > > > > But facing issue of ISCSI is ShareMount point need  Clustered
> > File
> > > > > > System,
> > > > > > > otherwise you can only setup one Cluster one Host.Setting
> up
> > > > > Cluster
> > > > > > > File system is issue here,   GFS2 is no more support on CentOS
> /
> > > > > Redhat,
> > > > > > > and there is bug in Ubuntu 18.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Thu, Oct 8, 2020 at 6:54 PM Andrija Panic <
> > > > andrija.pa...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > NFS is the rock-solid, and majority of users are using NFS, I
> > can
> > > > > tell
> > > > > > > that
> > > > > > > > for sure.
> > > > > > > > Do understand there is some difference between cheap
> white-box
> > > NFS
> > > > > > > solution
> > > > > > > > and a proprietary $$$ NFS solution, when it comes to
> > performance.
> > > > > > > >
> > > > > > > > Some users will use Ceph, some local disks (this is all KVM
> so
> > > far)
> > > > > > > > VMware users might be heavy on iSCSI 

Re: Cloudstack Advance with Security Group

2020-09-28 Thread Ivan Kudryavtsev
This is it.

On Mon, Sep 28, 2020 at 3:46 PM Hean Seng  wrote:

> In the log, I cannot see anything much,  except a few lines showing above
> .  Not sure if this is a bug on 4.14.
>
>
>
> 2020-09-28 03:53:36,585 ERROR [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-4:null) (logid:b6a4c077) Unable to apply default
> network rule for nic cloudbr0 for VM i-2-81-VM
>
> 2020-09-28 03:53:36,858 ERROR [kvm.resource.LibvirtComputingResource]
> (agentRequest-Handler-3:null) (logid:74058678) Unable to apply default
> network rule for nic cloudbr0 for VM i-2-81-VM
>
> 2020-09-28 03:53:36,858 WARN
> [resource.wrapper.LibvirtSecurityGroupRulesCommandWrapper]
> (agentRequest-Handler-3:null) (logid:74058678) Failed to program default
> network rules for vm i-2-81-VM
>
>
>
>
> On Mon, Sep 28, 2020 at 4:34 PM Ivan Kudryavtsev  wrote:
>
> > Hi,
> > no I'm on 4.11, so can not help with exact 4.14, and I'm on Ubuntu,
> though,
> > but for any KVM hypervisor Linux distribution, the logic is the same.
> >
> > On Mon, Sep 28, 2020 at 3:31 PM Hean Seng  wrote:
> >
> > > Hi
> > >
> > > Are you running on CentOS7 ?
> > >
> > > I am running on CentOS7 ,  ACS 4.14 ,  and  seem there is no log at of
> > > security_group.log
> > >
> > > # ls /var/log/cloudstack/agent/
> > >
> > > agent.log  resizevolume.log  setup.log
> > >
> > >
> > > I recheck back the Intall guide, seems no missing anything.
> > >
> > >
> > > Older intallation guide, 4.11 mentioned need , allow
> > > /usr/lib/sysctl.d/00-system.conf
> > >
> > > # Enable netfilter on bridges. net.bridge.bridge-nf-call-ip6tables = 1
> > > net.bridge.bridge-nf-call-iptables = 1
> > net.bridge.bridge-nf-call-arptables
> > > = 1
> > >
> > > And it has been done too.
> > >
> > >
> > >
> > > On Mon, Sep 28, 2020 at 4:05 PM Ivan Kudryavtsev 
> wrote:
> > >
> > > > Hi,
> > > >
> > > > No, this is not the issue.
> > > > It's a normal state of the system, as KVM hooks are a new and
> optional
> > > > feature of 4.14.
> > > >
> > > > You should find some sort of messages regarding security_groups at
> > > > /var/log/cloudstack/agent/security_group.log
> > > >
> > > >
> > > > On Mon, Sep 28, 2020 at 2:10 PM Hean Seng 
> wrote:
> > > >
> > > > > I not sure where goes wrong,  are you running on CentOS 7 ? I have
> > this
> > > > > error too, do you think is this contribute to the error as well:
> > > > >
> > > > > 2020-09-28 03:04:52,762 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > > > (agentRequest-Handler-5:null) (logid:4f23845b) Groovy script
> > > > > '/etc/cloudstack/agent/hooks/libvirt-vm-xml-transformer.groovy' is
> > not
> > > > > available. Transformations will not be applied.
> > > > >
> > > > > 2020-09-28 03:04:52,762 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > > > (agentRequest-Handler-5:null) (logid:4f23845b) Groovy scripting
> > engine
> > > is
> > > > > not initialized. Data transformation skipped.
> > > > >
> > > > > 2020-09-28 03:04:53,083 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > > > (agentRequest-Handler-5:null) (logid:4f23845b) Groovy script
> > > > > '/etc/cloudstack/agent/hooks/libvirt-vm-state-change.groovy' is not
> > > > > available. Transformations will not be applied.
> > > > >
> > > > > On Mon, Sep 28, 2020 at 2:27 PM Ivan Kudryavtsev 
> > > wrote:
> > > > >
> > > > > > This just means you installed it in the wrong way. Ebtables and
> > > > Iptables
> > > > > > must be filled with rules like
> > > > > >
> > > > > > -A i-6242-10304-def -m state --state RELATED,ESTABLISHED -j
> ACCEPT
> > > > > > -A i-6242-10304-def -p udp -m physdev --physdev-in vnet18
> > > > > > --physdev-is-bridged -m udp --sport 68 --dport 67 -j ACCEPT
> > > > > > -A i-6242-10304-def -p udp -m physdev --physdev-out vnet18
> > > > > > --physdev-is-bridged -m udp --sport 67 --dport 68 -j ACCEPT
> > > > > > -A i-6242-10304-def -m physdev --physdev-in vnet18
> > > --physdev-is-bridged
> > > > > -m
> > > > > > set ! --match-set i-6242-

Re: Cloudstack Advance with Security Group

2020-09-28 Thread Ivan Kudryavtsev
Hi,
No, I'm on 4.11, so I cannot help with exactly 4.14, and I'm on Ubuntu;
but for any KVM hypervisor Linux distribution, the logic is the same.

On Mon, Sep 28, 2020 at 3:31 PM Hean Seng  wrote:

> Hi
>
> Are you running on CentOS7 ?
>
> I am running on CentOS7 ,  ACS 4.14 ,  and  seem there is no log at of
> security_group.log
>
> # ls /var/log/cloudstack/agent/
>
> agent.log  resizevolume.log  setup.log
>
>
> I recheck back the Intall guide, seems no missing anything.
>
>
> Older intallation guide, 4.11 mentioned need , allow
> /usr/lib/sysctl.d/00-system.conf
>
> # Enable netfilter on bridges. net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1 net.bridge.bridge-nf-call-arptables
> = 1
>
> And it has been done too.
>
>
>
> On Mon, Sep 28, 2020 at 4:05 PM Ivan Kudryavtsev  wrote:
>
> > Hi,
> >
> > No, this is not the issue.
> > It's a normal state of the system, as KVM hooks are a new and optional
> > feature of 4.14.
> >
> > You should find some sort of messages regarding security_groups at
> > /var/log/cloudstack/agent/security_group.log
> >
> >
> > On Mon, Sep 28, 2020 at 2:10 PM Hean Seng  wrote:
> >
> > > I not sure where goes wrong,  are you running on CentOS 7 ? I have this
> > > error too, do you think is this contribute to the error as well:
> > >
> > > 2020-09-28 03:04:52,762 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > (agentRequest-Handler-5:null) (logid:4f23845b) Groovy script
> > > '/etc/cloudstack/agent/hooks/libvirt-vm-xml-transformer.groovy' is not
> > > available. Transformations will not be applied.
> > >
> > > 2020-09-28 03:04:52,762 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > (agentRequest-Handler-5:null) (logid:4f23845b) Groovy scripting engine
> is
> > > not initialized. Data transformation skipped.
> > >
> > > 2020-09-28 03:04:53,083 WARN  [kvm.resource.LibvirtKvmAgentHook]
> > > (agentRequest-Handler-5:null) (logid:4f23845b) Groovy script
> > > '/etc/cloudstack/agent/hooks/libvirt-vm-state-change.groovy' is not
> > > available. Transformations will not be applied.
> > >
> > > On Mon, Sep 28, 2020 at 2:27 PM Ivan Kudryavtsev 
> wrote:
> > >
> > > > This just means you installed it in the wrong way. Ebtables and
> > Iptables
> > > > must be filled with rules like
> > > >
> > > > -A i-6242-10304-def -m state --state RELATED,ESTABLISHED -j ACCEPT
> > > > -A i-6242-10304-def -p udp -m physdev --physdev-in vnet18
> > > > --physdev-is-bridged -m udp --sport 68 --dport 67 -j ACCEPT
> > > > -A i-6242-10304-def -p udp -m physdev --physdev-out vnet18
> > > > --physdev-is-bridged -m udp --sport 67 --dport 68 -j ACCEPT
> > > > -A i-6242-10304-def -m physdev --physdev-in vnet18
> --physdev-is-bridged
> > > -m
> > > > set ! --match-set i-6242-10304-vm src -j DROP
> > > > -A i-6242-10304-def -p udp -m physdev --physdev-in vnet18
> > > > --physdev-is-bridged -m set --match-set i-6242-10304-vm src -m udp
> > > --dport
> > > > 53 -j RETURN
> > > > -A i-6242-10304-def -p tcp -m physdev --physdev-in vnet18
> > > > --physdev-is-bridged -m set --match-set i-6242-10304-vm src -m tcp
> > > --dport
> > > > 53 -j RETURN
> > > > -A i-6242-10304-def -m physdev --physdev-in vnet18
> --physdev-is-bridged
> > > -m
> > > > set --match-set i-6242-10304-vm src -j i-6242-10304-vm-eg
> > > > -A i-6242-10304-def -m physdev --physdev-out vnet18
> > --physdev-is-bridged
> > > -j
> > > > i-6242-10304-vm
> > > > -A i-6242-10304-vm -p udp -m udp --dport 1:65535 -m state --state NEW
> > -j
> > > > ACCEPT
> > > > -A i-6242-10304-vm -p tcp -m tcp --dport 1:65535 -m state --state NEW
> > -j
> > > > ACCEPT
> > > > -A i-6242-10304-vm -p icmp -m icmp --icmp-type any -j ACCEPT
> > > > -A i-6242-10304-vm -j DROP
> > > >
> > > >
> > > > Bridge chain: i-4435-8929-vm-in, entries: 7, policy: ACCEPT
> > > > -s ! 1e:0:32:0:2:2 -j DROP
> > > > -p ARP -s ! 1e:0:32:0:2:2 -j DROP
> > > > -p ARP --arp-mac-src ! 1e:0:32:0:2:2 -j DROP
> > > > -p ARP -j i-4435-8929-vm-in-ips
> > > > -p ARP --arp-op Request -j ACCEPT
> > > > -p ARP --arp-op Reply -j ACCEPT
> > > > -p ARP -j DROP
> > > >
> > > >
> > > >
> > > > On Mon, Sep 28, 2020 at

Re: Cloudstack Advance with Security Group

2020-09-28 Thread Ivan Kudryavtsev
Hi,

No, this is not the issue.
It's a normal state of the system, as KVM hooks are a new and optional
feature of 4.14.

You should find some sort of messages regarding security_groups at
/var/log/cloudstack/agent/security_group.log


On Mon, Sep 28, 2020 at 2:10 PM Hean Seng  wrote:

> I not sure where goes wrong,  are you running on CentOS 7 ? I have this
> error too, do you think is this contribute to the error as well:
>
> 2020-09-28 03:04:52,762 WARN  [kvm.resource.LibvirtKvmAgentHook]
> (agentRequest-Handler-5:null) (logid:4f23845b) Groovy script
> '/etc/cloudstack/agent/hooks/libvirt-vm-xml-transformer.groovy' is not
> available. Transformations will not be applied.
>
> 2020-09-28 03:04:52,762 WARN  [kvm.resource.LibvirtKvmAgentHook]
> (agentRequest-Handler-5:null) (logid:4f23845b) Groovy scripting engine is
> not initialized. Data transformation skipped.
>
> 2020-09-28 03:04:53,083 WARN  [kvm.resource.LibvirtKvmAgentHook]
> (agentRequest-Handler-5:null) (logid:4f23845b) Groovy script
> '/etc/cloudstack/agent/hooks/libvirt-vm-state-change.groovy' is not
> available. Transformations will not be applied.
>
> On Mon, Sep 28, 2020 at 2:27 PM Ivan Kudryavtsev  wrote:
>
> > This just means you installed it in the wrong way. Ebtables and Iptables
> > must be filled with rules like
> >
> > -A i-6242-10304-def -m state --state RELATED,ESTABLISHED -j ACCEPT
> > -A i-6242-10304-def -p udp -m physdev --physdev-in vnet18
> > --physdev-is-bridged -m udp --sport 68 --dport 67 -j ACCEPT
> > -A i-6242-10304-def -p udp -m physdev --physdev-out vnet18
> > --physdev-is-bridged -m udp --sport 67 --dport 68 -j ACCEPT
> > -A i-6242-10304-def -m physdev --physdev-in vnet18 --physdev-is-bridged
> -m
> > set ! --match-set i-6242-10304-vm src -j DROP
> > -A i-6242-10304-def -p udp -m physdev --physdev-in vnet18
> > --physdev-is-bridged -m set --match-set i-6242-10304-vm src -m udp
> --dport
> > 53 -j RETURN
> > -A i-6242-10304-def -p tcp -m physdev --physdev-in vnet18
> > --physdev-is-bridged -m set --match-set i-6242-10304-vm src -m tcp
> --dport
> > 53 -j RETURN
> > -A i-6242-10304-def -m physdev --physdev-in vnet18 --physdev-is-bridged
> -m
> > set --match-set i-6242-10304-vm src -j i-6242-10304-vm-eg
> > -A i-6242-10304-def -m physdev --physdev-out vnet18 --physdev-is-bridged
> -j
> > i-6242-10304-vm
> > -A i-6242-10304-vm -p udp -m udp --dport 1:65535 -m state --state NEW -j
> > ACCEPT
> > -A i-6242-10304-vm -p tcp -m tcp --dport 1:65535 -m state --state NEW -j
> > ACCEPT
> > -A i-6242-10304-vm -p icmp -m icmp --icmp-type any -j ACCEPT
> > -A i-6242-10304-vm -j DROP
> >
> >
> > Bridge chain: i-4435-8929-vm-in, entries: 7, policy: ACCEPT
> > -s ! 1e:0:32:0:2:2 -j DROP
> > -p ARP -s ! 1e:0:32:0:2:2 -j DROP
> > -p ARP --arp-mac-src ! 1e:0:32:0:2:2 -j DROP
> > -p ARP -j i-4435-8929-vm-in-ips
> > -p ARP --arp-op Request -j ACCEPT
> > -p ARP --arp-op Reply -j ACCEPT
> > -p ARP -j DROP
> >
> >
> >
> > On Mon, Sep 28, 2020 at 1:10 PM Hean Seng  wrote:
> >
> > > I checked the hypervisor , it seems iptables is nothing inside ,  this
> is
> > > centos7 ,  initially i turnoff firewalld ,  but even i turn on it now
> and
> > > try to update the security group rules, it seems empty iptable rules :
> > >
> > > [root@kvm03 ~]# iptables -L -v -n
> > >
> > > Chain INPUT (policy ACCEPT 82903 packets, 1170M bytes)
> > >
> > >  pkts bytes target prot opt in out source
> > > destination
> > >
> > >
> > > Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
> > >
> > >  pkts bytes target prot opt in out source
> > > destination
> > >
> > >
> > > Chain OUTPUT (policy ACCEPT 80505 packets, 25M bytes)
> > >
> > >  pkts bytes target prot opt in out source
> > > destination
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Mon, Sep 28, 2020 at 12:05 PM Pearl d'Silva <
> > pearl.dsi...@shapeblue.com
> > > >
> > > wrote:
> > >
> > > > Hi Hean,
> > > >
> > > > In an Advanced Zone with Security Groups enabled, by default, egress
> > > > traffic from the VM is allowed, while Ingress traffic is denied.
> Hence,
> > > as
> > > > you rightly mentioned, security group rules are added accordingly.
> > These
> > > > rules get added on the hypervisor host, and you can verify them, by
> > going
> > >

Re: Cloudstack Advance with Security Group

2020-09-28 Thread Ivan Kudryavtsev
This just means you installed it in the wrong way. Ebtables and Iptables
must be filled with rules like

-A i-6242-10304-def -m state --state RELATED,ESTABLISHED -j ACCEPT
-A i-6242-10304-def -p udp -m physdev --physdev-in vnet18
--physdev-is-bridged -m udp --sport 68 --dport 67 -j ACCEPT
-A i-6242-10304-def -p udp -m physdev --physdev-out vnet18
--physdev-is-bridged -m udp --sport 67 --dport 68 -j ACCEPT
-A i-6242-10304-def -m physdev --physdev-in vnet18 --physdev-is-bridged -m
set ! --match-set i-6242-10304-vm src -j DROP
-A i-6242-10304-def -p udp -m physdev --physdev-in vnet18
--physdev-is-bridged -m set --match-set i-6242-10304-vm src -m udp --dport
53 -j RETURN
-A i-6242-10304-def -p tcp -m physdev --physdev-in vnet18
--physdev-is-bridged -m set --match-set i-6242-10304-vm src -m tcp --dport
53 -j RETURN
-A i-6242-10304-def -m physdev --physdev-in vnet18 --physdev-is-bridged -m
set --match-set i-6242-10304-vm src -j i-6242-10304-vm-eg
-A i-6242-10304-def -m physdev --physdev-out vnet18 --physdev-is-bridged -j
i-6242-10304-vm
-A i-6242-10304-vm -p udp -m udp --dport 1:65535 -m state --state NEW -j
ACCEPT
-A i-6242-10304-vm -p tcp -m tcp --dport 1:65535 -m state --state NEW -j
ACCEPT
-A i-6242-10304-vm -p icmp -m icmp --icmp-type any -j ACCEPT
-A i-6242-10304-vm -j DROP


Bridge chain: i-4435-8929-vm-in, entries: 7, policy: ACCEPT
-s ! 1e:0:32:0:2:2 -j DROP
-p ARP -s ! 1e:0:32:0:2:2 -j DROP
-p ARP --arp-mac-src ! 1e:0:32:0:2:2 -j DROP
-p ARP -j i-4435-8929-vm-in-ips
-p ARP --arp-op Request -j ACCEPT
-p ARP --arp-op Reply -j ACCEPT
-p ARP -j DROP



On Mon, Sep 28, 2020 at 1:10 PM Hean Seng  wrote:

> I checked the hypervisor , it seems iptables is nothing inside ,  this is
> centos7 ,  initially i turnoff firewalld ,  but even i turn on it now and
> try to update the security group rules, it seems empty iptable rules :
>
> [root@kvm03 ~]# iptables -L -v -n
>
> Chain INPUT (policy ACCEPT 82903 packets, 1170M bytes)
>
>  pkts bytes target prot opt in out source
> destination
>
>
> Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>
>  pkts bytes target prot opt in out source
> destination
>
>
> Chain OUTPUT (policy ACCEPT 80505 packets, 25M bytes)
>
>  pkts bytes target prot opt in out source
> destination
>
>
>
>
>
>
>
> On Mon, Sep 28, 2020 at 12:05 PM Pearl d'Silva  >
> wrote:
>
> > Hi Hean,
> >
> > In an Advanced Zone with Security Groups enabled, by default, egress
> > traffic from the VM is allowed, while Ingress traffic is denied. Hence,
> as
> > you rightly mentioned, security group rules are added accordingly. These
> > rules get added on the hypervisor host, and you can verify them, by going
> > into the host and searching for iptables rules corresponding to the VM
> > (internal name - i-x-y-VM).
> > This blog maybe helpful in providing further details:
> >
> >
> https://shankerbalan.net/blog/cloudstack-advanced-zone-with-security-groups/
> >
> > Thanks,
> > Pearl
> > 
> > From: Hean Seng 
> > Sent: Sunday, September 27, 2020 2:48 PM
> > To: users@cloudstack.apache.org 
> > Subject: Cloudstack Advance with Security Group
> >
> > Hi
> >
> > I created advance zone with security group, all working fine.
> >
> > But VMcreated , seems the default security group that assigned to the VM.
> > all accept policy , i understand  is Default Deny, and once add in the
> port
> > in Security Group Ingress and Egress, only is allowed
> >
> > Also, is this rules created at VirtualRouter of the SharedNetwork, or at
> > the Hypervisor?
> >
> >
> >
> > --
> > Regards,
> > Hean Seng
> >
> > pearl.dsi...@shapeblue.com
> > www.shapeblue.com
> > 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
> > @shapeblue
> >
> >
> >
> >
>
> --
> Regards,
> Hean Seng
>


Re: DHCP in SharedNetwork

2020-09-25 Thread Ivan Kudryavtsev
To be precise, look at:
iptables-save
ebtables -t nat -L

https://github.com/apache/cloudstack/blob/master/scripts/vm/network/security_group.py

пт, 25 сент. 2020 г., 16:58 Hean Seng :

> Ok. Let me look on that.
>
> Thank you
>
> On Fri, Sep 25, 2020 at 4:36 PM Ivan Kudryavtsev  wrote:
>
> > SG have default unoverridable rules which do that. No matter how you
> assign
> > ip addresses, they still will block some kind of traffic.
> >
> > Why would not you just type iptables-save on any vps host and take a
> look?
> >
> > пт, 25 сент. 2020 г., 15:17 Hean Seng :
> >
> > > If using security group to block it,  Share Guest network not able to
> use
> > > securitygroup right ?
> > >
> > >
> > > Actually the better way is inject proper CloudInit  value to set IP
> > instead
> > > of DHCP.
> > >
> > >
> > >
> > >
> > >
> > > On Fri, Sep 25, 2020 at 4:12 PM Ivan Kudryavtsev 
> wrote:
> > >
> > > > Hi,
> > > >
> > > > no way. Security groups block illegal dhcp servers.
> > > >
> > > > пт, 25 сент. 2020 г., 13:20 Hean Seng :
> > > >
> > > > > Hi
> > > > >
> > > > > Cloudstack use DHCP to allocate IP to VM.
> > > > >
> > > > > Do you all have issue if some of the VM in same Network, if other
> VM
> > > > > accidentally announce other DHCP , and it affect the deployment of
> IP
> > > of
> > > > > Cloudstack,
> > > > >
> > > > > It means 2 DHCP in some network .
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Regards,
> > > > > Hean Seng
> > > > >
> > > >
> > >
> > >
> > > --
> > > Regards,
> > > Hean Seng
> > >
> >
>
>
> --
> Regards,
> Hean Seng
>


Re: DHCP in SharedNetwork

2020-09-25 Thread Ivan Kudryavtsev
SGs have default, non-overridable rules which do that. No matter how you assign
IP addresses, they will still block certain kinds of traffic.

Why don't you just run iptables-save on any VPS host and take a look?

пт, 25 сент. 2020 г., 15:17 Hean Seng :

> If using security group to block it,  Share Guest network not able to use
> securitygroup right ?
>
>
> Actually the better way is inject proper CloudInit  value to set IP instead
> of DHCP.
>
>
>
>
>
> On Fri, Sep 25, 2020 at 4:12 PM Ivan Kudryavtsev  wrote:
>
> > Hi,
> >
> > no way. Security groups block illegal dhcp servers.
> >
> > пт, 25 сент. 2020 г., 13:20 Hean Seng :
> >
> > > Hi
> > >
> > > Cloudstack use DHCP to allocate IP to VM.
> > >
> > > Do you all have issue if some of the VM in same Network, if other VM
> > > accidentally announce other DHCP , and it affect the deployment of IP
> of
> > > Cloudstack,
> > >
> > > It means 2 DHCP in some network .
> > >
> > >
> > >
> > > --
> > > Regards,
> > > Hean Seng
> > >
> >
>
>
> --
> Regards,
> Hean Seng
>


Re: DHCP in SharedNetwork

2020-09-25 Thread Ivan Kudryavtsev
Hi,

No way - security groups block rogue DHCP servers.

пт, 25 сент. 2020 г., 13:20 Hean Seng :

> Hi
>
> Cloudstack use DHCP to allocate IP to VM.
>
> Do you all have issue if some of the VM in same Network, if other VM
> accidentally announce other DHCP , and it affect the deployment of IP of
> Cloudstack,
>
> It means 2 DHCP in some network .
>
>
>
> --
> Regards,
> Hean Seng
>


Re: slow performance of vm on gluster

2020-04-24 Thread Ivan Kudryavtsev
I use Gluster on SSD RAID5 (two replicas + arbiter) with ACS for users who
need VM HA. It works fine, but I doubt it will work well on HDD RAID5, which
is only good for linear workloads unless you have a BBU and other tricks.

пт, 24 апр. 2020 г., 21:15 :

> Hi,
>
> I would not use Gluster in production for VM workloads, perhaps as
> secondary storage where there are mostly sequential writes involved
> rather than load of random I/O, it would be fast at that.
>
> CEPH is a much better choice, it's user base is an order of magnitude
> larger and so many more problems and corner cases covered.
> CEPH is however also much more complex to deploy and maintain so you
> should do a few trials before that.
>
> You should be able to get a HA CEPH deployment up and running, I do not
> think they do deduplication though and neither GlusterFS afaik. I would
> imagine any deduplication process would put a crippling amount of load
> on the performance or require ludicrous amounts of extra resources.
>
> /imho
>
> Regards
>
> On 2020-04-24 13:05, Pratik Chandrakar wrote:
> > Hello,
> > I am using gluster 5.11 (3 node replication on raid 5) as a primary
> > storage for cloudstack 4.11.2 on Centos 7.7, the setup is running
> > stable for more then a year but performance of VM degraded very much
> > with the increasing number of active VMs, UI response, boot time of
> > VMs are also slow. One more problem which I face with the VMs which
> > has more than 500 GB of storage doesn't start in case of stopped
> > status without manual intervention of attaching and detaching of data
> > volumes. Therefore I am planning to migrate to Ceph RBD. Will it be a
> > right choice or something else should be considered because HA and
> > deduplication is a must have requirement??
> > I know it's a naive question but didn't found answer on google.
>


Re: ACS Resources of 3 Hosts for 1VM | Work ?

2020-02-22 Thread Ivan Kudryavtsev
No way. Just read the basics of SMP, MPP, NUMA computing.

сб, 22 февр. 2020 г., 15:15 Cloud Udupi :

> Hi all,
> We are looking for a solution to combine the CPU Cores and RAM of 3 Servers
> to meet our requirement for a VM in ACS which is used for heavy workload.
>
> *We are using ACS 4.13 with CentOS 7.6 (Kernel: Linux
> 3.10.0-1062.el7.x86_64), *
>
> *Each node has 4Core and 16GB of RAM with dual ethernet port. (Total of 12
> core and 48GB RAM = 1VM).*
>
> *All the nodes has been configured for HA.*
> *VM is working fine when I create with 4Core and 16GB RAM. HA also working
> fine, able to migrate the VM from one to another available node.*
>
> *I want to use all the resource available for 1VM, (Total of 12 core and
> 48GB RAM = 1VM). Will that work?.*
>
> If yes,
> Then we also want to know if one node goes down, will the VM still
> function?.
>
> Regards,
> Mark.
>


Re: [DISCUSS] Honouring listall=true in API calls to include project resources

2020-02-19 Thread Ivan Kudryavtsev
This is a nice improvement.

ср, 19 февр. 2020 г., 15:41 Rohit Yadav :

> All,
>
> Many list APIs, such as the listRouters API, accept a `listall` parameter
> as well as a `projectid` parameter. Currently, on calling a list API with
> listall=true and projectid=-1 it only returns resources belonging to all
> projects, the listall=true parameter is effectively ignored.
>
> We've come up with a PR that fixes the list APIs (mainly for Primate) to
> return all the resources including project when both listall=true and
> projectid=-1 are passed, by a non-normal user (i.e. the admin and
> domain-admin user):
>
> https://github.com/apache/cloudstack/pull/3894/files (the PR also fixed
> incorrect use in old UI)
>
>
> This will fix the multiple-api calling hack and Primate would be able to
> say list all routers in Infra->Routers with a single API call.
>
> In current UI, for example, to see all the routers under Infra -> Routers,
> two API calls are made with and without projectid=-1. The code in fact
> ignores the listall=true when projectid=-1 is used.
>
>
> However, this may break "soft" compatibility when both listall=true and
> projectid=-1 are passed for some list APIs, as:
>
>   *   Old behaviour: will only returns resources belonging to a project,
> only to admin and domain admin
>   *   New behaviour: will return all resources including project
> resources, only to admin and domain admin
>   *   Additional notes: normal user (not an admin, or a domain admin etc)
> will not be affected
>
> The listall parameter is documented as "if set to true - list resources
> that the caller is authorized to see", PR intends to fix this behaviour bug.
>
> As far as I can tell the projectid=-1 is only used in the current UI, any
> users, dev want to share their concerns, thoughts?
>
> Regards,
>
> Rohit Yadav
>
> Software Architect, ShapeBlue
>
> https://www.shapeblue.com
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
> @shapeblue
>
>
>
>


Re: Redundant NFS Storage for ACS

2020-02-01 Thread Ivan Kudryavtsev
You have to deploy HA NFS outside CloudStack. CS doesn't care about storage
fault tolerance.

Gluster is fine (shared mount point), Ceph is fine too, and HA NFS can be
deployed with certain approaches or with proprietary appliances.

сб, 1 февр. 2020 г., 19:28 Cloud Udupi :

> Hi,
> We are new to Apache CloudStack. We are looking for a Primary Storage (NFS
> share) solution, where it does not fail because of single node failure. Is
> there a way where I can use the NFS via any kind of clustering, so that
> when one node fails i will still have the VM's working from another node
> which is in ACS using the NFS cluster.
>
> Has anyone done the Ceph Storage as NFS (NFS Ganesha) and used it for the
> ACS on CentOS 7. Please share the steps so that we can look into it.
>
> Basically we need a system that has:-
> 1. One single point IP address with the shared mount point being same.
> 2. NFS storage, as Apache CloudStack supports HA only with NFS.
> 3. I need to deploy around around 60 VM's for our application.
>
> If NFS storage having the VM's goes down and not able to get back. How to
> fix this, so that we can get back the VM's in running state.
>
> Regards,
> Mark.
>


Re: Extra parameters to KVM instances

2020-01-26 Thread Ivan Kudryavtsev
Yes, my PR will support your case if you implement the assignment logic in
the hooks. I wrote it specifically to pass through free GPUs, certain USB
devices and specific unmanaged VXLANs to VMs.
The PR can be found here. I tested it and hopefully it will be added in 4.14,
but I will backport it to 4.11.2/3 for our org.

https://github.com/apache/cloudstack/pull/3839
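
As a rough illustration, the hook logic just needs to inject a hostdev element
of this shape into the domain XML for whichever free VF it picks (the PCI
address below is only an example):

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x3f' slot='0x01' function='0x0'/>
    </source>
  </hostdev>

Keeping track of which VFs are already assigned is up to your own logic in the
hook; CloudStack itself does not account for them.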


On Mon, Jan 27, 2020 at 1:52 PM Sakari Poussa  wrote:

> Hi Ivan,
>
> Thanks for the information. It was useful.
>
> Let me elaborate by use cases a bit more. I have PCI host devices with
> sriov capabilities. I may have 48 virtual functions (VF) on a host. I want
> to assign the VFs to some VM but not all. So I need some control which VMs
> gets the VF and others don't. Also, I need to keep track which VFs are
> already assigned and which are free. Lastly, I want to expose the VFs to
> containers running on VMs created by the upcoming Cloudstack Kubernetes
> Service (AKS, pull #3680).
>
> Looking at the first feature you mentioned, I don't think I can use that.
> It has no control on which VMs to add the extraconfig. It is all or
> nothing.
>
> The second feature, which you started to work on seems to have more
> potential. Do you see it can support my use case?
>
> Thanks, Sakari
>
>
>
> On Fri, Jan 24, 2020 at 6:08 PM Ivan Kudryavtsev  wrote:
>
> > Sakari, looks like you are looking for this one:
> > https://github.com/apache/cloudstack/pull/3510
> >
> > Also, Im working on implementation, which handles it another way:
> > https://github.com/apache/cloudstack/issues/3823
> >
> > пт, 24 янв. 2020 г., 17:45 Sakari Poussa :
> >
> > > Hi,
> > >
> > > Is there a way to pass extra parameters to KVM VMs when they start?
> That
> > > is, to the qemu.system-x86_64 command.
> > >
> > > I am looking a way to expose SRIOV PCI device to the VM and I need to
> > pass
> > > extra parameters like this
> > >
> > > qemu.system-x86_64 -device vfio-pci,host=3f:01.0 
> > >
> > > Is this possible somehow?
> > >
> > > --
> > > Thanks, Sakari
> > >
> >
>
>
> --
> Sakari Poussa
> 040 348 2970
>


Re: Extra parameters to KVM instances

2020-01-24 Thread Ivan Kudryavtsev
Sakari, looks like you are looking for this one:
https://github.com/apache/cloudstack/pull/3510

Also, Im working on implementation, which handles it another way:
https://github.com/apache/cloudstack/issues/3823

пт, 24 янв. 2020 г., 17:45 Sakari Poussa :

> Hi,
>
> Is there a way to pass extra parameters to KVM VMs when they start? That
> is, to the qemu.system-x86_64 command.
>
> I am looking a way to expose SRIOV PCI device to the VM and I need to pass
> extra parameters like this
>
> qemu.system-x86_64 -device vfio-pci,host=3f:01.0 
>
> Is this possible somehow?
>
> --
> Thanks, Sakari
>


Re: CloudStack-UI (CSUI) HTTP access helper demonstration

2020-01-12 Thread Ivan Kudryavtsev
Hi Sven,

Yes, it is. Although we are not developing it actively now, since we have no
spare engineers, bugs are still being fixed and we add small features when we
have spare time.

вс, 12 янв. 2020 г., 21:31 Sven Vogel :

> Hi Ivan,
>
> Nothing heard for a long time. Nice video. Is this implemented in your UI?
>
> Cheers
>
> Sven
>
>
> __
>
> Sven Vogel
> Teamlead Platform
>
> EWERK DIGITAL GmbH
> Brühl 24, D-04109 Leipzig
> P +49 341 42649 - 99
> F +49 341 42649 - 98
> s.vo...@ewerk.com
> www.ewerk.com
>
> Geschäftsführer:
> Dr. Erik Wende, Hendrik Schubert, Frank Richter
> Registergericht: Leipzig HRB 9065
>
> Zertifiziert nach:
> ISO/IEC 27001:2013
> DIN EN ISO 9001:2015
> DIN ISO/IEC 2-1:2011
>
> EWERK-Blog | LinkedIn | Xing | Twitter | Facebook
>
> Auskünfte und Angebote per Mail sind freibleibend und unverbindlich.
>
> Disclaimer Privacy:
> Der Inhalt dieser E-Mail (einschließlich etwaiger beigefügter Dateien) ist
> vertraulich und nur für den Empfänger bestimmt. Sollten Sie nicht der
> bestimmungsgemäße Empfänger sein, ist Ihnen jegliche Offenlegung,
> Vervielfältigung, Weitergabe oder Nutzung des Inhalts untersagt. Bitte
> informieren Sie in diesem Fall unverzüglich den Absender und löschen Sie
> die E-Mail (einschließlich etwaiger beigefügter Dateien) von Ihrem System.
> Vielen Dank.
>
> The contents of this e-mail (including any attachments) are confidential
> and may be legally privileged. If you are not the intended recipient of
> this e-mail, any disclosure, copying, distribution or use of its contents
> is strictly prohibited, and you should please notify the sender immediately
> and then delete it (including any attachments) from your system. Thank you.
> > Am 07.01.2020 um 05:47 schrieb Ivan Kudryavtsev :
> >
> > Hello, community,
> >
> > I recorded a quick video that demonstrates how CSUI can be used to
> > demonstrate an integrated template which rolls docker-compose passed
> > through VM UserData, tracks the deployment with special in-VM tracking
> > script install-monitor and enables simplified access to web-tracking
> script
> > through special template tags.
> >
> > Watch it on Youtube:
> > https://www.youtube.com/watch?v=JJRC9nEWnvw=youtu.be
> >
> > Sorry, it's partially Russian content, but our customer doesn't have
> in-VM
> > tracking script localization, as its cloud is for the local audience.
> >
> > Best regards, Ivan
>
>


CloudStack-UI (CSUI) HTTP access helper demonstration

2020-01-06 Thread Ivan Kudryavtsev
Hello, community,

I recorded a quick video that shows how CSUI can be used with an integrated
template which rolls out a docker-compose file passed through VM UserData,
tracks the deployment with a special in-VM tracking script (install-monitor),
and enables simplified access to the web tracking script through special
template tags.

Watch it on Youtube:
https://www.youtube.com/watch?v=JJRC9nEWnvw=youtu.be

Sorry, it's partially Russian content, but our customer doesn't have in-VM
tracking script localization, as its cloud is for the local audience.

Best regards, Ivan


Re: disk total vs disk allocated

2019-09-17 Thread Ivan Kudryavtsev
Disk Allocated != Disk Used. It shows how much space all volumes would occupy
if thin provisioning stopped saving space and they were fully written out.
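
A quick way to see the difference on a single QCOW2 volume (the path is just
an example) is:

  qemu-img info /mnt/<pool-uuid>/<volume-uuid>

"virtual size" is what counts towards Disk Allocated, while "disk size" is the
space actually consumed on the share.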

вт, 17 сент. 2019 г., 13:00 Piotr Pisz :

> Hi all,
>
> I have a strange situation, we have a CephFS share mounted as
> SharedMountPoint.
> CS shows Disk Total as 12T (that's ok), while Disk Allocated shows like
> 11.8T (it's not ok, disk is 50% full).
> How can We diagnose the cause?
>
> Regards,
> Piotr
>
>


Re: VirtIO not detected by guest on new ACS with new KVM host. I have moved a CentOS 7 template from another ACS to new one. The guests deployed from same template on old ACS detects disk devices as v

2019-09-02 Thread Ivan Kudryavtsev
Virtio scsi is detected as sdX. It's absolutely fine.

пн, 2 сент. 2019 г., 19:32 Andrija Panic :

> lspci inside that OS ?
>
> On Mon, 2 Sep 2019 at 14:25, Fariborz Navidan 
> wrote:
>
> > Hi,
> >
> > XML says it is using virtio-scsi but guest detects disk device at
> /dev/sda
> >
> > [root@fr-kvm1 ~]# virsh dumpxml i-2-53-VM | grep controller
> >   
> >   
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> >   
> >
> >
> > On Mon, Sep 2, 2019 at 10:12 AM Andrija Panic 
> > wrote:
> >
> > > Kindly double check if both OS type and the controller defined are
> > > identical - it makes no sense to me to use same settings but get
> > different
> > > results.
> > >
> > > Also, try CentOS 7.2 on the new installation (though, explicitly
> settings
> > > the controller on the template should override whatever controller is
> > > defined IN the OS Type).
> > >
> > > Can you do the "lspci" inside the OS/VM, to show the controller? Or
> > > actually "virsh dumpxml i-x-y-VM" would be good as well (grepping the
> > > controller...)
> > >
> > > Best,
> > > Andrija
> > >
> > > On Sun, Sep 1, 2019, 23:37 Fariborz Navidan 
> > wrote:
> > >
> > > > Hi,
> > > >
> > > > :D I just noticed right now that I have pasted my question into
> subject
> > > > line instead of message body ;D
> > > >
> > > > Both are version 4.12.0.0. A guest running CentOS 7.6 on old ACS
> > detects
> > > > disk as vda but same guest on  new one detects as sda. Templates on
> ACS
> > > > installation. OS type "CentOS 7" gives virtio access to CentOS 7.6 on
> > old
> > > > ACS.
> > > > 
> > > >
> > > > On Mon, Sep 2, 2019 at 1:18 AM Andrija Panic <
> andrija.pa...@gmail.com>
> > > > wrote:
> > > >
> > > > > That's a nice email subject indeed :)
> > > > >
> > > > > For template, make sure that you are using identical OS type for
> that
> > > > > template, on the new ACS as on the old one. Are these ACS
> > installations
> > > > of
> > > > > the same version?
> > > > >
> > > > > Keep in mind that "CentOS 7" is not the same as "CentOS 7.2" from
> ACS
> > > > > perspective and the last one will consume virtio, while the
> previous
> > > one
> > > > > will spin IDE controller inside a VM...
> > > > >
> > > > > Andrija
> > > > >
> > > > > On Sun, Sep 1, 2019, 12:14 Fariborz Navidan  >
> > > > wrote:
> > > > >
> > > > > > Hello,
> > > > > >
> > > > > > I have configured a new ACS?
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
> --
>
> Andrija Panić
>


Re: Filtering DHCP traffic

2019-08-08 Thread Ivan Kudryavtsev
Even when no SGs are used, the agent still creates iptables/ebtables rules and
should block MAC/IP spoofing and rogue DHCP announcements. I'm not sure how it
works in the current CS version, but I believe it is:

- either a local issue which must be investigated through the agent logs and
iptables/ebtables dumps,

- or a CS bug which was introduced recently.

We have an ancient ACS 4.3 basic zone without SG, and no DHCP faking works
there. Unfortunately all my zones now use SGs, so I cannot check...
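
If you want to check (or add) such a rule by hand, a rule of roughly this shape
drops DHCP server replies coming out of a guest interface (vnet0 is a
placeholder; this is only an illustration, not the agent's exact rule):

  ebtables -t nat -A PREROUTING -i vnet0 -p IPv4 --ip-proto udp --ip-sport 67 -j DROP

i.e. UDP frames with source port 67 (DHCP offers/acks) entering the bridge from
that guest interface are dropped.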

пт, 9 авг. 2019 г., 4:17 Andrija Panic :

> Nope, that is the reason security groups should be used in multi-tenant
> shared network... At least I'm not aware that is possible.
> Not sure if hacking the DB is possible though...
>
> On Thu, 8 Aug 2019, 20:58 Fariborz Navidan,  wrote:
>
> > Hello,
> > I have found a user VM who is running a sort of DHCP server i.e. a VPN
> > server, etc. User VM is on default shared network without security groups
> > enabled in a Basic zone which does not spport multiple networks. Is there
> > any way to either enable security groups on existing network and add rule
> > to stop VMs offer DHCP and prevent conflicting with VR's DHCP or manually
> > add a firewall rule on VR to filter DHCP traffic from user VMs?
> >
> > TIA
> >
>


Re: Additional local storage as primary storage

2019-07-17 Thread Ivan Kudryavtsev
Shared mountpoint is ok

чт, 18 июл. 2019 г., 12:14 Fariborz Navidan :

> Hi
>
> I have already used this way. I feel local NFS mount point adds another
> layer over local storage and can affect IO speed and performance. What do
> you think of it? What about SharedMountPoint option? Between this and local
> NFS which one offers better performance?
>
> Thanks
>
> On Thu, Jul 18, 2019 at 5:38 AM Ivan Kudryavtsev  >
> wrote:
>
> > Hi,
> >
> > As for 4.11.2, no way to have multiple local storages configured for a
> > single host. There is no simple way to overcome it. The only one I see
> is a
> > pretty ugly - locally mounted NFS, created as a cluster wide storage when
> > only a single host added to a single cluster...
> >
> > In short, it's not supported, only one local storage per host. It's a
> great
> > feature request, but unsure many people use that topology.
> >
> > чт, 18 июл. 2019 г., 4:04 Fariborz Navidan :
> >
> > > Hello,
> > >
> > > I have a few mount points which refer to different block devices on
> local
> > > machine. I am trying to add them as additional primary local storage
> to
> > > CS, Unfortunately, when adding primary storage there is no Filesystem
> > > option to choose. As a result I have managed to modify the storage_pool
> > > table settting the storage type to Filesystem. Then shows its state as
> > > "Up". However it because the path is under / such as /home and / is on
> > > different disk, it mistakenly detects the storage capcity as it is for
> > root
> > > filesystem and not the real size of filesystem /home belongs to.
> > >
> > > Any idea how to fix this?
> > >
> > > Thanks
> > >
> >
>


Re: Additional local storage as primary storage

2019-07-17 Thread Ivan Kudryavtsev
Hi,

As of 4.11.2, there is no way to have multiple local storages configured for a
single host, and there is no simple way to overcome it. The only workaround I
see is pretty ugly: a locally mounted NFS export, created as a cluster-wide
storage for a cluster that contains only that single host (see the sketch
below)...

In short, it's not supported; only one local storage per host. It's a great
feature request, but I'm not sure many people use that topology.
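
A sketch of that workaround (the path, address and export options are
illustrative only; 10.0.0.21 stands for the host's own storage IP):

  # /etc/exports on the host itself
  /data/localpool  10.0.0.21(rw,no_root_squash,async)

Then run exportfs -ra and add it in CloudStack as NFS primary storage with
server 10.0.0.21 and path /data/localpool, scoped to the cluster that contains
only this host.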

чт, 18 июл. 2019 г., 4:04 Fariborz Navidan :

> Hello,
>
> I have a few mount points which refer to different block devices on local
> machine. I am trying to add them as additional primary local storage  to
> CS, Unfortunately, when adding primary storage there is no Filesystem
> option to choose. As a result I have managed to modify the storage_pool
> table settting the storage type to Filesystem. Then shows its state as
> "Up". However it because the path is under / such as /home and / is on
> different disk, it mistakenly detects the storage capcity as it is for root
> filesystem and not the real size of filesystem /home belongs to.
>
> Any idea how to fix this?
>
> Thanks
>


Re: [VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread Ivan Kudryavtsev
ZFS is not a good choice for high-IO applications. Use the simplest layering
possible.

пн, 15 июл. 2019 г., 18:50 Christoffer Pedersen :

> Hello,
>
> ZFS is unfortunately not supported, otherwise I would have recommended
> that. But if you are going local systems (no nfs/iscsi), ext4 would be the
> way to go.
>
> On Mon, Jul 15, 2019 at 1:23 PM Ivan Kudryavtsev  >
> wrote:
>
> > Hi,
> >
> > if you use local fs, use just ext4 over the required disk topology which
> > gives the desired redundancy.
> >
> > E.g. JBOD, R0 work well when data safety policy is established and
> backups
> > are maintained well.
> >
> > Otherwise look to R5, R10 or R6.
> >
> > пн, 15 июл. 2019 г., 18:05 :
> >
> > > Isn't that a bit apples and oranges? Ceph is a network distributed
> > > thingy, not a local solution.
> > >
> > > I'd use linux/software raid + lvm, it's the only one supported (by
> > > CentOS/RedHat).
> > >
> > > ZFS on Linux could be interesting if it was supported by Cloudstack,
> but
> > > it is not, you'd end up using qcow2 (COW) files on top of a COW
> > > filesystem which could lead to issues. Also ZFS is not really the
> > > fastest fs out there, though it does have some nice features.
> > >
> > > Did you really mean raid 0? I hope you have backups. :)
> > >
> > > hth
> > >
> > >
> > > On 2019-07-15 11:49, Fariborz Navidan wrote:
> > > > Hello,
> > > >
> > > > Which one do you think is faster to use for local soft Raid-0 for
> > > > primary
> > > > storage? Ceph, ZFS or Built-in soft raid manager of CentOS? Which one
> > > > can
> > > > gives us better IOPS and IO latency on NVMe SSD disks? The storage
> will
> > > > be
> > > > used for production cloud environment where arround 60 VMs will run
> on
> > > > top
> > > > of it.
> > > >
> > > > Your ides are highly appreciated
> > >
> >
>
>
> --
> Thanks,
> Chris pedersen
>


Re: [VOTE] Ceph, ZFS or Linux Soft RAID?

2019-07-15 Thread Ivan Kudryavtsev
Hi,

if you use a local FS, just use ext4 on top of whatever disk topology gives
you the desired redundancy.

E.g. JBOD or RAID0 work well when a data-safety policy is established and
backups are maintained well.

Otherwise look at RAID5, RAID10 or RAID6.
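
For example, a software RAID10 set with ext4 on top can be as simple as this
(device names are placeholders):

  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
  mkfs.ext4 /dev/md0

and the resulting filesystem is then used as ordinary local storage.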

пн, 15 июл. 2019 г., 18:05 :

> Isn't that a bit apples and oranges? Ceph is a network distributed
> thingy, not a local solution.
>
> I'd use linux/software raid + lvm, it's the only one supported (by
> CentOS/RedHat).
>
> ZFS on Linux could be interesting if it was supported by Cloudstack, but
> it is not, you'd end up using qcow2 (COW) files on top of a COW
> filesystem which could lead to issues. Also ZFS is not really the
> fastest fs out there, though it does have some nice features.
>
> Did you really mean raid 0? I hope you have backups. :)
>
> hth
>
>
> On 2019-07-15 11:49, Fariborz Navidan wrote:
> > Hello,
> >
> > Which one do you think is faster to use for local soft Raid-0 for
> > primary
> > storage? Ceph, ZFS or Built-in soft raid manager of CentOS? Which one
> > can
> > gives us better IOPS and IO latency on NVMe SSD disks? The storage will
> > be
> > used for production cloud environment where arround 60 VMs will run on
> > top
> > of it.
> >
> > Your ides are highly appreciated
>


Re: Network RX/TX and I/O monitoring for hosts

2019-06-28 Thread Ivan Kudryavtsev
With virtio you don't get host-aggregated IO and network numbers (AFAIK), but
for CPU & RAM that's true.

пт, 28 июн. 2019 г., 21:11 Rakesh v :

> Correct me if I'm wrong, I think it can be done using virtio commands
> also. It exposes cpu, memory, disk and network stats of the VM which can be
> exported using libvirt exporter
>
> Sent from my iPhone
>
> > On 28-Jun-2019, at 3:51 PM, Ivan Kudryavtsev 
> wrote:
> >
> > Hi. It's easily done with Zabbix. Whole variety of underlying topologies
> is
> > too difficult to monitor with prebuilt monitoring system... anyway, you
> can
> > code it!
> >
> > пт, 28 июн. 2019 г., 20:30 Fariborz Navidan :
> >
> >> Hello All,
> >>
> >> Does ACS provide a way to monitor a host's network bandwidth (RX/TX) and
> >> block storage IOPS  and IO read/write rate for a host?
> >>
> >> Thanks
> >>
>


Re: Network RX/TX and I/O monitoring for hosts

2019-06-28 Thread Ivan Kudryavtsev
Hi. It's easily done with Zabbix. The whole variety of underlying topologies is
too difficult to cover with a prebuilt monitoring system... anyway, you can
code it yourself!

пт, 28 июн. 2019 г., 20:30 Fariborz Navidan :

> Hello All,
>
> Does ACS provide a way to monitor a host's network bandwidth (RX/TX) and
> block storage IOPS  and IO read/write rate for a host?
>
> Thanks
>


Re: Unable to ping/ssh my VMs after a stop/start

2019-06-03 Thread Ivan Kudryavtsev
Daniel,

why do you think you should be able to ping from the hypervisor? Normally, you
have to add an IP in the same subnet to the bridge in order to ping the tun/tap
device. I'm not sure, but if it doesn't work after the step above, check your
iptables/ebtables to make sure every rule is OK.
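
For example (the extra address is just an illustration - pick any free IP from
the same guest subnet):

  ip addr add 192.168.0.250/24 dev cloudbr0
  ping -c 3 192.168.0.61
  ip addr del 192.168.0.250/24 dev cloudbr0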

Commands:
iptables-save
ebtables -t nat -L
ipset -L

пн, 3 июн. 2019 г., 18:14 daniel.bell...@outlook.be <
daniel.bell...@outlook.be>:

> Hello,
>
> I'm a new cloudstack user. I've started with a fresh install on 2 ubuntu
> 18.04 boxes : 1 for the management server , 1 for a KVM node.
> I'm able to create ubuntu VMs to which I can connect via ssh from the KVM
> node.
> But when I stop/start the VMs, I can no longer ping or ssh the VMs.
>
> 
> The sequence of operations are :
> 
>
> 0) Check bridge on KVM node before VM creation
> --
> xxx@node001:~$ brctl show cloudbr0
> bridge name bridge id   STP enabled interfaces
> cloudbr08000.ce1467c23351   no  eno1
> vnet0
> vnet3
> vnet4
> vnet6
> vnet7
>
> 1) Create a VM instance from the web UI
> ---
> => IP 192.168.0.61
> => internal name i-2-21-VM
> xxx@node001:~$ brctl show cloudbr0
> bridge name bridge id   STP enabled interfaces
> cloudbr08000.ce1467c23351   no  eno1
> vnet0
> vnet3
> vnet4
> vnet6
> vnet7
> vnet8
> xxx@node001:~$ ifconfig vnet8
> vnet8: flags=4163  mtu 1500
> inet6 fe80::fc00:13ff:fe00:16  prefixlen 64  scopeid 0x20
> ether fe:00:13:00:00:16  txqueuelen 1000  (Ethernet)
> RX packets 22  bytes 2540 (2.5 KB)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 37  bytes 5398 (5.3 KB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
> => arp entry : 192.168.0.61 ether   1e:00:13:00:00:16   C
>cloudbr0
>
> 2) From the KVM node, ssh or ping 192.168.0.61 works perfectly
> --
> xxx@node001:~$ ping 192.168.0.61
> PING 192.168.0.61 (192.168.0.61) 56(84) bytes of data.
> 64 bytes from 192.168.0.61: icmp_seq=1 ttl=64 time=0.631 ms
> 64 bytes from 192.168.0.61: icmp_seq=2 ttl=64 time=0.209 ms
> 64 bytes from 192.168.0.61: icmp_seq=3 ttl=64 time=0.226 ms
> ^C
> --- 192.168.0.61 ping statistics ---
> 3 packets transmitted, 3 received, 0% packet loss, time 2056ms
> rtt min/avg/max/mdev = 0.209/0.355/0.631/0.195 ms
>
> 3) Stop the VM instance from the UI
> ---
> xxx@node001:~$ brctl show cloudbr0
> bridge name bridge id   STP enabled interfaces
> cloudbr08000.ce1467c23351   no  eno1
> vnet0
> vnet3
> vnet4
> vnet6
> vnet7
> xxx@node001:~$ ping 192.168.0.61
> PING 192.168.0.61 (192.168.0.61) 56(84) bytes of data.
> ^C
> --- 192.168.0.61 ping statistics ---
> 3 packets transmitted, 0 received, 100% packet loss, time 2044ms
>
> => arp entry : 192.168.0.61 (incomplete)
> cloudbr0
>
> 4) Start the VM instance from the UI
> 
> xxx@node001:~$ brctl show cloudbr0
> bridge name bridge id   STP enabled interfaces
> cloudbr08000.ce1467c23351   no  eno1
> vnet0
> vnet3
> vnet4
> vnet6
> vnet7
> vnet8
>
> xxx@node001:~$ ifconfig vnet8
> vnet8: flags=4163  mtu 1500
> inet6 fe80::fc00:13ff:fe00:16  prefixlen 64  scopeid 0x20
> ether fe:00:13:00:00:16  txqueuelen 1000  (Ethernet)
> RX packets 89  bytes 4330 (4.3 KB)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX 

Re: multi-hipervisor deployment

2019-05-27 Thread Ivan Kudryavtsev
Alejandro, what do the hypervisor libvirt logs show?

вт, 28 мая 2019 г., 4:49 Alejandro Ruiz Bermejo :

> Hi,
> I have installed cloudstack 4.11.2 on ubuntu 16 with one management server
> and one compute node with kvm.
>
> I want to add a new compute node with a different hipervisor (lxc) on
> another server with ubuntu 16. I'm following the docs instructions but when
> i try to add the new host i get the following error on
> /var/log/cloudstack/agent.log
>
> 2019-05-27 17:43:53,280 WARN  [cloud.resource.ServerResourceBase]
> (main:null) (logid:) Incorrect details for private Nic during
> initialization of ServerResourceBase
> 2019-05-27 17:43:53,281 ERROR [cloud.agent.AgentShell] (main:null) (logid:)
> Unable to start agent: Unable to configure LibvirtComputingResource
>
> can someone please help with this?
>
> Regards,
> Alejandro
>


Re: Poor NVMe Performance with KVM

2019-05-17 Thread Ivan Kudryavtsev
Nux,

there is no way to set it for NVMe, as it only has the [none] option in
/sys/block/nvme0n1/queue/scheduler.

Setting any scheduler for the VM volume doesn't improve anything.

пт, 17 мая 2019 г., 20:21 Nux! :

> What happens when you set deadline scheduler in both HV and guest?
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
> - Original Message -
> > From: "Ivan Kudryavtsev" 
> > To: "users" , "dev" <
> d...@cloudstack.apache.org>
> > Sent: Friday, 17 May, 2019 14:16:31
> > Subject: Re: Poor NVMe Performance with KVM
>
> > BTW, You may think that the improvement is achieved by caching, but I
> clear
> > the cache with
> > sync & echo 3 > /proc/sys/vm/drop_caches
> >
> > So, can't claim for sure, need other opinion, but looks like for NVMe,
> > writethrough must be used if you want high IO rate. At least with Intel
> > p4500.
> >
> >
> > пт, 17 мая 2019 г., 20:04 Ivan Kudryavtsev :
> >
> >> Well, just FYI, I changed cache_mode from NULL (none), to writethrough
> >> directly in DB and the performance boosted greatly. It may be an
> important
> >> feature for NVME drives.
> >>
> >> Currently, on 4.11, the user can set cache-mode for disk offerings, but
> >> cannot for service offerings, which are translated to cache=none
> >> corresponding disk offerings.
> >>
> >> The only way is to use SQL to change that for root disk disk offerings.
> >> CreateServiceOffering API doesn't support cache mode. It can be a
> serious
> >> limitation for NVME users, because by default they could meet poor
> >> read/write performance.
> >>
> >> пт, 17 мая 2019 г., 19:30 Ivan Kudryavtsev :
> >>
> >>> Darius, thanks for your participation,
> >>>
> >>> first, I used 4.14 kernel which is the default one for my cluster.
> Next,
> >>> switched to 4.15 with dist-upgrade.
> >>>
> >>> Do you have an idea how to turn on amount of queues for virtio-scsi
> with
> >>> Cloudstack?
> >>>
> >>> пт, 17 мая 2019 г., 19:26 Darius Kasparavičius :
> >>>
> >>>> Hi,
> >>>>
> >>>> I can see a few issues with your xml file. You can try using "queues"
> >>>> inside your disk definitions. This should help a little, not sure by
> >>>> how much for your case, but for my specific it went up by almost the
> >>>> number of queues. Also try cache directsync or writethrough. You
> >>>> should switch kernel if bugs are still there with 4.15 kernel.
> >>>>
> >>>> On Fri, May 17, 2019 at 12:14 PM Ivan Kudryavtsev
> >>>>  wrote:
> >>>> >
> >>>> > Hello, colleagues.
> >>>> >
> >>>> > Hope, someone could help me. I just deployed a new VM host with
> Intel
> >>>> P4500
> >>>> > local storage NVMe drive.
> >>>> >
> >>>> > From Hypervisor host I can get expected performance, 200K RIOPS,
> 3GBs
> >>>> with
> >>>> > FIO, write performance is also high as expected.
> >>>> >
> >>>> > I've created a new KVM VM Service offering with virtio-scsi
> controller
> >>>> > (tried virtio as well) and VM is deployed. Now I try to benchmark it
> >>>> with
> >>>> > FIO. Results are very strange:
> >>>> >
> >>>> > 1. Read/Write with large blocks (1M) shows expected performance (my
> >>>> limits
> >>>> > are R=1000/W=500 MBs).
> >>>> >
> >>>> > 2. Write with direct=0 leads to expected 50K IOPS, while write with
> >>>> > direct=1 leads to very moderate 2-3K IOPS.
> >>>> >
> >>>> > 3. Read with direct=0, direct=1 both lead to 3000 IOPS.
> >>>> >
> >>>> > During the benchmark I see VM IOWAIT=20%, while host IOWAIT is 0%
> >>>> which is
> >>>> > strange.
> >>>> >
> >>>> > So, basically, from inside VM my NVMe works very slow when small
> IOPS
> >>>> are
> >>>> > executed. From the host, it works great.
> >>>> >
> >>>> > I tried to mount the volume with NBD to /dev/nbd0 and benchmark.
> Read
> >>>> > performance is nice. Maybe someone managed to use NVME

Re: Poor NVMe Performance with KVM

2019-05-17 Thread Ivan Kudryavtsev
BTW, you may think that the improvement is achieved by caching, but I clear
the cache with
sync && echo 3 > /proc/sys/vm/drop_caches

So I can't claim it for sure and another opinion is welcome, but it looks like
for NVMe, writethrough must be used if you want a high IO rate. At least with
the Intel P4500.
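
For anyone who wants to try the same thing: what I changed in the DB was the
cache_mode column of the corresponding disk offering, along these lines (4.11
schema; back up the cloud database first - this is only a sketch, and the
offering id is a placeholder):

  UPDATE cloud.disk_offering SET cache_mode = 'writethrough' WHERE id = <offering_id>;

The VM then needs a stop/start so the new cache mode ends up in the generated
domain XML.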


пт, 17 мая 2019 г., 20:04 Ivan Kudryavtsev :

> Well, just FYI, I changed cache_mode from NULL (none), to writethrough
> directly in DB and the performance boosted greatly. It may be an important
> feature for NVME drives.
>
> Currently, on 4.11, the user can set cache-mode for disk offerings, but
> cannot for service offerings, which are translated to cache=none
> corresponding disk offerings.
>
> The only way is to use SQL to change that for root disk disk offerings.
> CreateServiceOffering API doesn't support cache mode. It can be a serious
> limitation for NVME users, because by default they could meet poor
> read/write performance.
>
> пт, 17 мая 2019 г., 19:30 Ivan Kudryavtsev :
>
>> Darius, thanks for your participation,
>>
>> first, I used 4.14 kernel which is the default one for my cluster. Next,
>> switched to 4.15 with dist-upgrade.
>>
>> Do you have an idea how to turn on amount of queues for virtio-scsi with
>> Cloudstack?
>>
>> пт, 17 мая 2019 г., 19:26 Darius Kasparavičius :
>>
>>> Hi,
>>>
>>> I can see a few issues with your xml file. You can try using "queues"
>>> inside your disk definitions. This should help a little, not sure by
>>> how much for your case, but for my specific it went up by almost the
>>> number of queues. Also try cache directsync or writethrough. You
>>> should switch kernel if bugs are still there with 4.15 kernel.
>>>
>>> On Fri, May 17, 2019 at 12:14 PM Ivan Kudryavtsev
>>>  wrote:
>>> >
>>> > Hello, colleagues.
>>> >
>>> > Hope, someone could help me. I just deployed a new VM host with Intel
>>> P4500
>>> > local storage NVMe drive.
>>> >
>>> > From Hypervisor host I can get expected performance, 200K RIOPS, 3GBs
>>> with
>>> > FIO, write performance is also high as expected.
>>> >
>>> > I've created a new KVM VM Service offering with virtio-scsi controller
>>> > (tried virtio as well) and VM is deployed. Now I try to benchmark it
>>> with
>>> > FIO. Results are very strange:
>>> >
>>> > 1. Read/Write with large blocks (1M) shows expected performance (my
>>> limits
>>> > are R=1000/W=500 MBs).
>>> >
>>> > 2. Write with direct=0 leads to expected 50K IOPS, while write with
>>> > direct=1 leads to very moderate 2-3K IOPS.
>>> >
>>> > 3. Read with direct=0, direct=1 both lead to 3000 IOPS.
>>> >
>>> > During the benchmark I see VM IOWAIT=20%, while host IOWAIT is 0%
>>> which is
>>> > strange.
>>> >
>>> > So, basically, from inside VM my NVMe works very slow when small IOPS
>>> are
>>> > executed. From the host, it works great.
>>> >
>>> > I tried to mount the volume with NBD to /dev/nbd0 and benchmark. Read
>>> > performance is nice. Maybe someone managed to use NVME with KVM with
>>> small
>>> > IOPS?
>>> >
>>> > The filesystem is XFS, previously tried with EXT4 - results are the
>>> same.
>>> >
>>> > This is the part of VM XML definition generated by CloudStack:
>>> >
>>> >   
>>> > /usr/bin/kvm-spice
>>> > 
>>> >   
>>> >   >> > file='/var/lib/libvirt/images/6809dbd0-4a15-4014-9322-fe9010695934'/>
>>> >   
>>> > 
>>> > >> > file='/var/lib/libvirt/images/ac43742c-3991-4be1-bff1-7617bf4fc6ef'/>
>>> > 
>>> >   
>>> >   
>>> >   
>>> > 1048576000
>>> > 524288000
>>> > 10
>>> > 5
>>> >   
>>> >   6809dbd04a1540149322
>>> >   
>>> >   >> unit='0'/>
>>> > 
>>> > 
>>> >   
>>> >   
>>> >   
>>> >   
>>> >   
>>> >   >> unit='0'/>
>>> > 
>>> > 
>>> >   
>>> >   >> > function='0x0'/>
>>> > 
>>> >
>>> > So, what I see now, is that it works slower than couple of two Samsung
>>> 960
>>> > PRO which is extremely strange.
>>> >
>>> > Thanks in advance.
>>> >
>>> >
>>> > --
>>> > With best regards, Ivan Kudryavtsev
>>> > Bitworks LLC
>>> > Cell RU: +7-923-414-1515
>>> > Cell USA: +1-201-257-1512
>>> > WWW: http://bitworks.software/ <http://bw-sw.com/>
>>>
>>


Re: Poor NVMe Performance with KVM

2019-05-17 Thread Ivan Kudryavtsev
Well, just FYI, I changed cache_mode from NULL (none) to writethrough
directly in the DB and the performance boosted greatly. It may be an
important feature for NVMe drives.

Currently, on 4.11, the user can set the cache mode for disk offerings, but
not for service offerings, which are translated into corresponding disk
offerings with cache=none.

The only way is to use SQL to change it for the disk offerings backing root
disks. The createServiceOffering API doesn't support a cache mode. This can
be a serious limitation for NVMe users, because with the default they may
see poor read/write performance.
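
For anyone who needs the same workaround, the change was roughly the
following (a sketch against the 4.11 schema; verify the table and column
names on your own installation before running anything):

  -- check the current cache mode of the offering
  SELECT id, name, cache_mode FROM cloud.disk_offering WHERE id = <offering_id>;
  -- switch it to writethrough
  UPDATE cloud.disk_offering SET cache_mode = 'writethrough' WHERE id = <offering_id>;

Existing VMs only pick up the new mode after a stop/start, since the domain
XML is generated at start time.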

пт, 17 мая 2019 г., 19:30 Ivan Kudryavtsev :

> Darius, thanks for your participation,
>
> first, I used 4.14 kernel which is the default one for my cluster. Next,
> switched to 4.15 with dist-upgrade.
>
> Do you have an idea how to turn on amount of queues for virtio-scsi with
> Cloudstack?
>
> пт, 17 мая 2019 г., 19:26 Darius Kasparavičius :
>
>> Hi,
>>
>> I can see a few issues with your xml file. You can try using "queues"
>> inside your disk definitions. This should help a little, not sure by
>> how much for your case, but for my specific it went up by almost the
>> number of queues. Also try cache directsync or writethrough. You
>> should switch kernel if bugs are still there with 4.15 kernel.
>>
>> On Fri, May 17, 2019 at 12:14 PM Ivan Kudryavtsev
>>  wrote:
>> >
>> > Hello, colleagues.
>> >
>> > Hope, someone could help me. I just deployed a new VM host with Intel
>> P4500
>> > local storage NVMe drive.
>> >
>> > From Hypervisor host I can get expected performance, 200K RIOPS, 3GBs
>> with
>> > FIO, write performance is also high as expected.
>> >
>> > I've created a new KVM VM Service offering with virtio-scsi controller
>> > (tried virtio as well) and VM is deployed. Now I try to benchmark it
>> with
>> > FIO. Results are very strange:
>> >
>> > 1. Read/Write with large blocks (1M) shows expected performance (my
>> limits
>> > are R=1000/W=500 MBs).
>> >
>> > 2. Write with direct=0 leads to expected 50K IOPS, while write with
>> > direct=1 leads to very moderate 2-3K IOPS.
>> >
>> > 3. Read with direct=0, direct=1 both lead to 3000 IOPS.
>> >
>> > During the benchmark I see VM IOWAIT=20%, while host IOWAIT is 0% which
>> is
>> > strange.
>> >
>> > So, basically, from inside VM my NVMe works very slow when small IOPS
>> are
>> > executed. From the host, it works great.
>> >
>> > I tried to mount the volume with NBD to /dev/nbd0 and benchmark. Read
>> > performance is nice. Maybe someone managed to use NVME with KVM with
>> small
>> > IOPS?
>> >
>> > The filesystem is XFS, previously tried with EXT4 - results are the
>> same.
>> >
>> > This is the part of VM XML definition generated by CloudStack:
>> >
>> >   
>> > /usr/bin/kvm-spice
>> > 
>> >   
>> >   > > file='/var/lib/libvirt/images/6809dbd0-4a15-4014-9322-fe9010695934'/>
>> >   
>> > 
>> > > > file='/var/lib/libvirt/images/ac43742c-3991-4be1-bff1-7617bf4fc6ef'/>
>> > 
>> >   
>> >   
>> >   
>> > 1048576000
>> > 524288000
>> > 10
>> > 5
>> >   
>> >   6809dbd04a1540149322
>> >   
>> >   
>> > 
>> > 
>> >   
>> >   
>> >   
>> >   
>> >   
>> >   
>> > 
>> > 
>> >   
>> >   > > function='0x0'/>
>> > 
>> >
>> > So, what I see now, is that it works slower than couple of two Samsung
>> 960
>> > PRO which is extremely strange.
>> >
>> > Thanks in advance.
>> >
>> >
>> > --
>> > With best regards, Ivan Kudryavtsev
>> > Bitworks LLC
>> > Cell RU: +7-923-414-1515
>> > Cell USA: +1-201-257-1512
>> > WWW: http://bitworks.software/ <http://bw-sw.com/>
>>
>


Re: Poor NVMe Performance with KVM

2019-05-17 Thread Ivan Kudryavtsev
Darius, thanks for your participation.

First, I used the 4.14 kernel, which is the default one for my cluster.
Next, I switched to 4.15 with dist-upgrade.

Do you have an idea how to set the number of queues for virtio-scsi with
CloudStack?
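
For reference, on plain libvirt the setting seems to live on the controller
element, something like this (a sketch; the queue count is just an example
value):

  <controller type='scsi' index='0' model='virtio-scsi'>
    <driver queues='4'/>
  </controller>

But I don't see a way to pass that through CloudStack itself, short of
patching the generated XML on the agent side.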

пт, 17 мая 2019 г., 19:26 Darius Kasparavičius :

> Hi,
>
> I can see a few issues with your xml file. You can try using "queues"
> inside your disk definitions. This should help a little, not sure by
> how much for your case, but for my specific it went up by almost the
> number of queues. Also try cache directsync or writethrough. You
> should switch kernel if bugs are still there with 4.15 kernel.
>
> On Fri, May 17, 2019 at 12:14 PM Ivan Kudryavtsev
>  wrote:
> >
> > Hello, colleagues.
> >
> > Hope, someone could help me. I just deployed a new VM host with Intel
> P4500
> > local storage NVMe drive.
> >
> > From Hypervisor host I can get expected performance, 200K RIOPS, 3GBs
> with
> > FIO, write performance is also high as expected.
> >
> > I've created a new KVM VM Service offering with virtio-scsi controller
> > (tried virtio as well) and VM is deployed. Now I try to benchmark it with
> > FIO. Results are very strange:
> >
> > 1. Read/Write with large blocks (1M) shows expected performance (my
> limits
> > are R=1000/W=500 MBs).
> >
> > 2. Write with direct=0 leads to expected 50K IOPS, while write with
> > direct=1 leads to very moderate 2-3K IOPS.
> >
> > 3. Read with direct=0, direct=1 both lead to 3000 IOPS.
> >
> > During the benchmark I see VM IOWAIT=20%, while host IOWAIT is 0% which
> is
> > strange.
> >
> > So, basically, from inside VM my NVMe works very slow when small IOPS are
> > executed. From the host, it works great.
> >
> > I tried to mount the volume with NBD to /dev/nbd0 and benchmark. Read
> > performance is nice. Maybe someone managed to use NVME with KVM with
> small
> > IOPS?
> >
> > The filesystem is XFS, previously tried with EXT4 - results are the same.
> >
> > This is the part of VM XML definition generated by CloudStack:
> >
> >   
> > /usr/bin/kvm-spice
> > 
> >   
> >> file='/var/lib/libvirt/images/6809dbd0-4a15-4014-9322-fe9010695934'/>
> >   
> > 
> >  > file='/var/lib/libvirt/images/ac43742c-3991-4be1-bff1-7617bf4fc6ef'/>
> > 
> >   
> >   
> >   
> > 1048576000
> > 524288000
> > 10
> > 5
> >   
> >   6809dbd04a1540149322
> >   
> >   
> > 
> > 
> >   
> >   
> >   
> >   
> >   
> >   
> > 
> > 
> >   
> >> function='0x0'/>
> > 
> >
> > So, what I see now, is that it works slower than couple of two Samsung
> 960
> > PRO which is extremely strange.
> >
> > Thanks in advance.
> >
> >
> > --
> > With best regards, Ivan Kudryavtsev
> > Bitworks LLC
> > Cell RU: +7-923-414-1515
> > Cell USA: +1-201-257-1512
> > WWW: http://bitworks.software/ <http://bw-sw.com/>
>


Re: Poor NVMe Performance with KVM

2019-05-17 Thread Ivan Kudryavtsev
The host is a Dell R620 with dual E5-2690 CPUs and 256 GB of 1333 MHz DDR3.

пт, 17 мая 2019 г., 19:22 Ivan Kudryavtsev :

> Nux,
>
> I use Ubuntu 16.04 with "none" scheduler and the latest kernel 4.15. Guest
> is Ubuntu 18.04 with Noop scheduler for scsi-virtio  and "none" for virtio.
>
> Thanks.
>
> пт, 17 мая 2019 г., 19:18 Nux! :
>
>> Hi,
>>
>> What HV is that? CentOS? Are you using the right tuned profile? What
>> about in the guest? Which IO scheduler?
>>
>> --
>> Sent from the Delta quadrant using Borg technology!
>>
>> Nux!
>> www.nux.ro
>>
>> - Original Message -
>> > From: "Ivan Kudryavtsev" 
>> > To: "users" 
>> > Sent: Friday, 17 May, 2019 10:13:50
>> > Subject: Poor NVMe Performance with KVM
>>
>> > Hello, colleagues.
>> >
>> > Hope, someone could help me. I just deployed a new VM host with Intel
>> P4500
>> > local storage NVMe drive.
>> >
>> > From Hypervisor host I can get expected performance, 200K RIOPS, 3GBs
>> with
>> > FIO, write performance is also high as expected.
>> >
>> > I've created a new KVM VM Service offering with virtio-scsi controller
>> > (tried virtio as well) and VM is deployed. Now I try to benchmark it
>> with
>> > FIO. Results are very strange:
>> >
>> > 1. Read/Write with large blocks (1M) shows expected performance (my
>> limits
>> > are R=1000/W=500 MBs).
>> >
>> > 2. Write with direct=0 leads to expected 50K IOPS, while write with
>> > direct=1 leads to very moderate 2-3K IOPS.
>> >
>> > 3. Read with direct=0, direct=1 both lead to 3000 IOPS.
>> >
>> > During the benchmark I see VM IOWAIT=20%, while host IOWAIT is 0% which
>> is
>> > strange.
>> >
>> > So, basically, from inside VM my NVMe works very slow when small IOPS
>> are
>> > executed. From the host, it works great.
>> >
>> > I tried to mount the volume with NBD to /dev/nbd0 and benchmark. Read
>> > performance is nice. Maybe someone managed to use NVME with KVM with
>> small
>> > IOPS?
>> >
>> > The filesystem is XFS, previously tried with EXT4 - results are the
>> same.
>> >
>> > This is the part of VM XML definition generated by CloudStack:
>> >
>> >  
>> >/usr/bin/kvm-spice
>> >
>> >  
>> >  > > file='/var/lib/libvirt/images/6809dbd0-4a15-4014-9322-fe9010695934'/>
>> >  
>> >
>> >    > > file='/var/lib/libvirt/images/ac43742c-3991-4be1-bff1-7617bf4fc6ef'/>
>> >
>> >  
>> >  
>> >  
>> >1048576000
>> >524288000
>> >10
>> >5
>> >  
>> >  6809dbd04a1540149322
>> >  
>> >  
>> >
>> >
>> >  
>> >  
>> >  
>> >  
>> >  
>> >  
>> >
>> >
>> >  
>> >  > > function='0x0'/>
>> >
>> >
>> > So, what I see now, is that it works slower than couple of two Samsung
>> 960
>> > PRO which is extremely strange.
>> >
>> > Thanks in advance.
>> >
>> >
>> > --
>> > With best regards, Ivan Kudryavtsev
>> > Bitworks LLC
>> > Cell RU: +7-923-414-1515
>> > Cell USA: +1-201-257-1512
>> > WWW: http://bitworks.software/ <http://bw-sw.com/>
>>
>


Re: Poor NVMe Performance with KVM

2019-05-17 Thread Ivan Kudryavtsev
Nux,

I use Ubuntu 16.04 with the "none" scheduler and the latest 4.15 kernel. The
guest is Ubuntu 18.04 with the noop scheduler for virtio-scsi and "none" for virtio.

Thanks.

пт, 17 мая 2019 г., 19:18 Nux! :

> Hi,
>
> What HV is that? CentOS? Are you using the right tuned profile? What about
> in the guest? Which IO scheduler?
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
> - Original Message -
> > From: "Ivan Kudryavtsev" 
> > To: "users" 
> > Sent: Friday, 17 May, 2019 10:13:50
> > Subject: Poor NVMe Performance with KVM
>
> > Hello, colleagues.
> >
> > Hope, someone could help me. I just deployed a new VM host with Intel
> P4500
> > local storage NVMe drive.
> >
> > From Hypervisor host I can get expected performance, 200K RIOPS, 3GBs
> with
> > FIO, write performance is also high as expected.
> >
> > I've created a new KVM VM Service offering with virtio-scsi controller
> > (tried virtio as well) and VM is deployed. Now I try to benchmark it with
> > FIO. Results are very strange:
> >
> > 1. Read/Write with large blocks (1M) shows expected performance (my
> limits
> > are R=1000/W=500 MBs).
> >
> > 2. Write with direct=0 leads to expected 50K IOPS, while write with
> > direct=1 leads to very moderate 2-3K IOPS.
> >
> > 3. Read with direct=0, direct=1 both lead to 3000 IOPS.
> >
> > During the benchmark I see VM IOWAIT=20%, while host IOWAIT is 0% which
> is
> > strange.
> >
> > So, basically, from inside VM my NVMe works very slow when small IOPS are
> > executed. From the host, it works great.
> >
> > I tried to mount the volume with NBD to /dev/nbd0 and benchmark. Read
> > performance is nice. Maybe someone managed to use NVME with KVM with
> small
> > IOPS?
> >
> > The filesystem is XFS, previously tried with EXT4 - results are the same.
> >
> > This is the part of VM XML definition generated by CloudStack:
> >
> >  
> >/usr/bin/kvm-spice
> >
> >  
> >   > file='/var/lib/libvirt/images/6809dbd0-4a15-4014-9322-fe9010695934'/>
> >  
> >
> > > file='/var/lib/libvirt/images/ac43742c-3991-4be1-bff1-7617bf4fc6ef'/>
> >
> >  
> >  
> >  
> >1048576000
> >524288000
> >10
> >5
> >  
> >  6809dbd04a1540149322
> >  
> >  
> >
> >
> >  
> >  
> >  
> >  
> >  
> >  
> >
> >
> >  
> >   > function='0x0'/>
> >
> >
> > So, what I see now, is that it works slower than couple of two Samsung
> 960
> > PRO which is extremely strange.
> >
> > Thanks in advance.
> >
> >
> > --
> > With best regards, Ivan Kudryavtsev
> > Bitworks LLC
> > Cell RU: +7-923-414-1515
> > Cell USA: +1-201-257-1512
> > WWW: http://bitworks.software/ <http://bw-sw.com/>
>


Poor NVMe Performance with KVM

2019-05-17 Thread Ivan Kudryavtsev
Hello, colleagues.

Hope, someone could help me. I just deployed a new VM host with Intel P4500
local storage NVMe drive.

From the hypervisor host I can get the expected performance with FIO, 200K
read IOPS and 3 GB/s; write performance is also as high as expected.

I've created a new KVM VM Service offering with virtio-scsi controller
(tried virtio as well) and VM is deployed. Now I try to benchmark it with
FIO. Results are very strange:

1. Read/write with large blocks (1M) shows the expected performance (my
limits are R=1000/W=500 MB/s).

2. Write with direct=0 leads to expected 50K IOPS, while write with
direct=1 leads to very moderate 2-3K IOPS.

3. Read with direct=0, direct=1 both lead to 3000 IOPS.
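
For context, the direct small-block read case above corresponds to an fio
job roughly like this (a sketch, not the exact command line I ran):

  fio --name=randread --filename=/root/testfile --size=4G \
      --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1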

During the benchmark I see VM IOWAIT=20%, while host IOWAIT is 0% which is
strange.

So, basically, from inside the VM my NVMe works very slowly when small IOs
are executed. From the host, it works great.

I tried to mount the volume with NBD to /dev/nbd0 and benchmark it there;
read performance is good. Has anyone managed to get small-block IOPS out of
NVMe with KVM?

The filesystem is XFS, previously tried with EXT4 - results are the same.

This is the part of VM XML definition generated by CloudStack:

  
/usr/bin/kvm-spice

  
  
  



  
  
  
1048576000
524288000
10
5
  
  6809dbd04a1540149322
  
  


  
  
  
  
  
  


  
  


So what I see now is that it works slower than a couple of Samsung 960
PROs, which is extremely strange.

Thanks in advance.


-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>


Re: issue with system vm template not downloading

2019-05-10 Thread Ivan Kudryavtsev
Richard,

1. About the bridges: just check that traffic flows correctly between the
HV hosts and SS. All your hosts should be able to mount SS (a quick check
is sketched below).
2. About your HV/storage topology.

1st. CloudStack doesn't balance the storages. As long as the first chosen
storage is capable of deploying the image, it will be used, so you will not
be able to balance volumes between them.
2nd. Every HV will mount every storage. If an HV fails (which probably
happens more frequently than a storage failure), __all the__ hosts will hit
the problem with its NFS share and trigger a reboot, so your whole cloud
will reboot.
Frankly, it's the worst topology possible. What I recommend is to switch to
Ceph or Gluster if you want shared storage, split the hosts into separate
clusters, or use LOCAL STORAGE instead of NFS so your VMs use local
storage. Later, if you wish to move VMs between hosts, you can do it
manually.
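
A quick way to verify point 1 from each hypervisor is something like this
(a sketch; replace the secondary storage host and export path with yours):

  showmount -e ss-host.example.com
  mount -t nfs ss-host.example.com:/export/secondary /mnt \
    && ls /mnt/template && umount /mnt

If any host cannot complete that, the bridge/network side needs fixing
before anything else.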

Best wishes





пт, 10 мая 2019 г. в 20:14, Richard Persaud :

> Hi Ivan,
>
> Thanks for the info.
>
> Will you clarify what I should be looking for in my bridge set up? It's
> fairly standard other then setting the MTU to 9000.
>
> The host/storage devices are using hardware RAID5. And all hypervisors are
> capable of mounting any of the NFS shares.
>
> Will you give me some detail on what you mean when you say using native
> RAID is a bad idea? Why is that and what is the recommended way to set up?
>
> Thanks in advance
>
>
> Regards,
> Richard Persaud
> Sys Spec, Info Security Del | Macy's, Inc.
> 5985 State Bridge Rd. | Johns Creek, GA 30097
> Office: 678-474-2357
> https://macyspartners.com/PublishingImages/MakeLifeShineBrighter.png
>
> 
> From: Ivan Kudryavtsev 
> Sent: Thursday, May 9, 2019 21:57
> To: users
> Subject: Re: issue with system vm template not downloading
>
> ⚠ EXT MSG:
>
> Richard, the most probable problem is with bridge devices. Management
> server doesn't care about systemvm. The only unit which cares - ssvm and
> hypervisor. Also, if you are using naive RAID/NFS within one cluster when
> any HV can mount any storage (mesh) it's extremely bad idea. You will get s
> lot of reboots if any of node meets outage. If you have DRBD or Gluster,
> then, it's fine.
>
> пт, 10 мая 2019 г., 6:32 Richard Persaud  <mailto:richard.pers...@macys.com>>:
>
> > Hello,
> >
> > Our setup:
> > 4.11 on Ubuntu 16.04 LTS. One management server, eight compute/storage
> > hosts (dual function).
> > NFS for storage.
> > No firewall in between the mgmt server and the hosts.
> > Management and storage traffic run over the same VLAN (same network).
> >
> > We are having an issue with the system vm template not downloading.
> > We have seen this issue on multiple occasions
> >
> > "Timeout waiting for response from storage host"
> >
> > It does not give any further information.
> >
> > The management server can successfully contact and mount the NFS shares
> > from all the compute/storage hosts.
> >
> > How can I determine which storage host is timing out? Why is it timing
> out?
> >
> > Regards,
> >
> > Richard Persaud
> >
>
> * This is an EXTERNAL EMAIL. Stop and think before clicking a link or
> opening attachments.
>


-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>


Re: issue with system vm template not downloading

2019-05-09 Thread Ivan Kudryavtsev
Richard, the most probable problem is with the bridge devices. The
management server doesn't care about the system VM template; the only units
which care are the SSVM and the hypervisor. Also, if you are using naive
RAID/NFS within one cluster where any HV can mount any storage (a mesh),
it's an extremely bad idea: you will get a lot of reboots if any node has
an outage. If you have DRBD or Gluster, then it's fine.

пт, 10 мая 2019 г., 6:32 Richard Persaud :

> Hello,
>
> Our setup:
> 4.11 on Ubuntu 16.04 LTS. One management server, eight compute/storage
> hosts (dual function).
> NFS for storage.
> No firewall in between the mgmt server and the hosts.
> Management and storage traffic run over the same VLAN (same network).
>
> We are having an issue with the system vm template not downloading.
> We have seen this issue on multiple occasions
>
> "Timeout waiting for response from storage host"
>
> It does not give any further information.
>
> The management server can successfully contact and mount the NFS shares
> from all the compute/storage hosts.
>
> How can I determine which storage host is timing out? Why is it timing out?
>
> Regards,
>
> Richard Persaud
>


Re: Fast access to specific templates vs. secondary storage

2019-04-16 Thread Ivan Kudryavtsev
Peter,

once you have provisioned a single VM from a certain template, it is no
longer copied from secondary to primary storage, and boot becomes almost
instant.

In our case, I can read from SS at 800 MB/s and write to primary at 1 GB/s,
so the template copy for normal templates takes only 2-3 seconds. Again,
once it has been copied, other VMs just use it.


вт, 16 апр. 2019 г., 7:19 Andrija Panic :

> Hi Peter,
>
> initial template copy process from Secondary to Primary storage pool should
> be decently fast, since you are doing sequential IO read from Secondary NFS
> and doing sequential IO write to Primary Storage - that should be very fast
> in general.
>
> No way around this process at the moment.
>
> Best,
> Andrija
>
> On Tue, 16 Apr 2019 at 13:08, Nux!  wrote:
>
> > Hi,
> >
> > It should already be very fast. What kind of storage is it you're using?
> >
> > With local/nfs storage the VMs get spawned from a template, as a sort of
> > writable snapshots, it's very quick once you deployed it at least once
> (so
> > the template lands on the primary storage).
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> >
> > - Original Message -
> > > From: "peter muryshkin" 
> > > To: "users" 
> > > Sent: Tuesday, 16 April, 2019 08:23:55
> > > Subject: Fast access to specific templates vs. secondary storage
> >
> > > Hi all,
> > >
> > > naturally the secondary storage is slower than the primary one.
> > >
> > > Now what if you need some VMs to load very fast i.e. for CI/CD
> > environment
> > > purposes (that is, a build/test iterations requires one or more one-way
> > fresh
> > > VMs?)
> > >
> > > Is there currently a way to have some VM templates in the primary
> > storage?
> > >
> > > kind regards
> > > Peter
> >
>
>
> --
>
> Andrija Panić
>


Re: CloudStack 4.11.2 SS problem

2019-04-15 Thread Ivan Kudryavtsev
Well, we don't usually remove templates; we just rename them and reset the
public and featured flags.

I'll check, but I suppose a template removal API call shouldn't produce a
case like this? How could it happen through the API that the store_ref
record is removed while the template is still in place?

The records for certain templates are absent in template_store_ref, while
others are just fine... It looks like a very serious bug for regular users
who don't manage templates and ISOs programmatically.

пн, 15 апр. 2019 г., 12:21 Andrija Panic :

> Assuming it's not Assange :), perhaps check with teammates if any changes
> on these templates were done - are your records completely missing or just
> altered in bad way ?
>
> Can you double check the API log for any delete template API calls ?
>
>
> On Mon, 15 Apr 2019 at 17:39, Ivan Kudryavtsev 
> wrote:
>
> > Andrija,
> >
> > yes, I have the case with missing records in 'template_store_ref'. What I
> > don't get is how it could happen...
> >
> > пн, 15 апр. 2019 г. в 11:24, Andrija Panic :
> >
> > > Hi Ivan,
> > >
> > > is it possible that your DB got somehow corrupted or that you are
> missing
> > > records in template_store_ref etc  - this might be the reason why SSVM
> is
> > > trying to download templates again - if you check the logs for
> > > non-problematic templates, you will see something like "template
> already
> > on
> > > store this and that, no need to download again, skipping". For the rest
> > > (which are considered not downloaded), it will try to download again
> from
> > > the URL in the main vm_template table.
> > >
> > > Can you also check for the records on the template_spool_ref (Primary
> > > Storage) - I assume these might be OK, ca you spin new VM from an
> > existing
> > > (problematic) template ?
> > >
> > > Behavior (from your second email) is expected - same kind of errors you
> > > would get if you just added another Secondary Storage to your
> CloudStack
> > > setup, but original URL is unavailable (you could play with hacking MD5
> > in
> > > DB, but that is not a solution at all).
> > >
> > > As for the restoration of the template.properties - do you have a
> backup
> > ?
> > >
> > > Best,
> > > Andrija
> > >
> > > On Mon, 15 Apr 2019 at 16:26, Ivan Kudryavtsev <
> kudryavtsev...@bw-sw.com
> > >
> > > wrote:
> > >
> > > > To follow up. When SSVM boots it tries to redownload all the
> templates
> > > from
> > > > original sources this leads to next oucomes:
> > > > - if the source is not available, the result is:
> > > > No route to host (Host unreachable) - If the template is changed on
> > > source:
> > > > then it leads to MD5 sum error.
> > > >
> > > > Any ideas, why SSVM tries to download all the templates on SSVM
> again?
> > > > Never seen that before.
> > > >
> > > >
> > > > пн, 15 апр. 2019 г. в 09:40, Ivan Kudryavtsev <
> > kudryavtsev...@bw-sw.com
> > > >:
> > > >
> > > > > Hello, community.
> > > > >
> > > > > Today, We've met the problem with ACS SS, which looks like a
> critical
> > > > > error. In some point of time, new templates stopped to upload and
> the
> > > old
> > > > > ones were unable to be removed.
> > > > >
> > > > > After the SSVM recreation, I've met the situation when some
> templates
> > > are
> > > > > not activated and have their "template.properties" size set to 0.
> > > > >
> > > > > More to add, certain already working templates were tried to be
> > > > > redownloaded and got errors like:
> > > > > Failed post download script: checksum
> > > > > "{MD5}9f8c94ed7e4b19a78d4f0e3fc406d81b" didn't match the given
> value,
> > > > > "{MD5}7eed347f4cc7e66f55e4f668cd9a5151"
> > > > > I've checked the following:
> > > > > - no lack of spare space on SS;
> > > > > - no problems with management servers in the last months;
> > > > > - no problems with SSVM.
> > > > >
> > > > > It's ACS 4.11.2, all VMs are working, of course as templates are
> > copied
> > > > to
> > > > > primary, but we've lost almost half of the template repository.
> > > > >
> > > > > Is there a way to recreate "template.properties" from DB or another
> > > > > approach? All the templates are still in place, but they are not
> > > > activated
> > > > > upon SSVM start.
> > > > >
> > > > > Many thanks.
> > > > >
> > > > >
> > > > > --
> > > > > With best regards, Ivan Kudryavtsev
> > > > > Bitworks LLC
> > > > > Cell RU: +7-923-414-1515
> > > > > Cell USA: +1-201-257-1512
> > > > > WWW: http://bitworks.software/ <http://bw-sw.com/>
> > > > >
> > > > >
> > > >
> > > > --
> > > > With best regards, Ivan Kudryavtsev
> > > > Bitworks LLC
> > > > Cell RU: +7-923-414-1515
> > > > Cell USA: +1-201-257-1512
> > > > WWW: http://bitworks.software/ <http://bw-sw.com/>
> > > >
> > >
> > >
> > > --
> > >
> > > Andrija Panić
> > >
> >
> >
> > --
> > With best regards, Ivan Kudryavtsev
> > Bitworks LLC
> > Cell RU: +7-923-414-1515
> > Cell USA: +1-201-257-1512
> > WWW: http://bitworks.software/ <http://bw-sw.com/>
> >
>
>
> --
>
> Andrija Panić
>


Re: CloudStack 4.11.2 SS problem

2019-04-15 Thread Ivan Kudryavtsev
Andrija,

yes, I have the case with missing records in 'template_store_ref'. What I
don't get is how it could happen...
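
For the check itself I used something like this (a sketch; adjust the
template name filter):

  SELECT t.id, t.name, s.store_id, s.download_state, s.install_path
  FROM cloud.vm_template t
  LEFT JOIN cloud.template_store_ref s ON s.template_id = t.id
  WHERE t.removed IS NULL AND t.name LIKE '%ubuntu%';

For the problematic templates the join simply returns NULL on the store
side, while the healthy ones show a downloaded state and an install path.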

пн, 15 апр. 2019 г. в 11:24, Andrija Panic :

> Hi Ivan,
>
> is it possible that your DB got somehow corrupted or that you are missing
> records in template_store_ref etc  - this might be the reason why SSVM is
> trying to download templates again - if you check the logs for
> non-problematic templates, you will see something like "template already on
> store this and that, no need to download again, skipping". For the rest
> (which are considered not downloaded), it will try to download again from
> the URL in the main vm_template table.
>
> Can you also check for the records on the template_spool_ref (Primary
> Storage) - I assume these might be OK, ca you spin new VM from an existing
> (problematic) template ?
>
> Behavior (from your second email) is expected - same kind of errors you
> would get if you just added another Secondary Storage to your CloudStack
> setup, but original URL is unavailable (you could play with hacking MD5 in
> DB, but that is not a solution at all).
>
> As for the restoration of the template.properties - do you have a backup ?
>
> Best,
> Andrija
>
> On Mon, 15 Apr 2019 at 16:26, Ivan Kudryavtsev 
> wrote:
>
> > To follow up. When SSVM boots it tries to redownload all the templates
> from
> > original sources this leads to next oucomes:
> > - if the source is not available, the result is:
> > No route to host (Host unreachable) - If the template is changed on
> source:
> > then it leads to MD5 sum error.
> >
> > Any ideas, why SSVM tries to download all the templates on SSVM again?
> > Never seen that before.
> >
> >
> > пн, 15 апр. 2019 г. в 09:40, Ivan Kudryavtsev  >:
> >
> > > Hello, community.
> > >
> > > Today, We've met the problem with ACS SS, which looks like a critical
> > > error. In some point of time, new templates stopped to upload and the
> old
> > > ones were unable to be removed.
> > >
> > > After the SSVM recreation, I've met the situation when some templates
> are
> > > not activated and have their "template.properties" size set to 0.
> > >
> > > More to add, certain already working templates were tried to be
> > > redownloaded and got errors like:
> > > Failed post download script: checksum
> > > "{MD5}9f8c94ed7e4b19a78d4f0e3fc406d81b" didn't match the given value,
> > > "{MD5}7eed347f4cc7e66f55e4f668cd9a5151"
> > > I've checked the following:
> > > - no lack of spare space on SS;
> > > - no problems with management servers in the last months;
> > > - no problems with SSVM.
> > >
> > > It's ACS 4.11.2, all VMs are working, of course as templates are copied
> > to
> > > primary, but we've lost almost half of the template repository.
> > >
> > > Is there a way to recreate "template.properties" from DB or another
> > > approach? All the templates are still in place, but they are not
> > activated
> > > upon SSVM start.
> > >
> > > Many thanks.
> > >
> > >
> > > --
> > > With best regards, Ivan Kudryavtsev
> > > Bitworks LLC
> > > Cell RU: +7-923-414-1515
> > > Cell USA: +1-201-257-1512
> > > WWW: http://bitworks.software/ <http://bw-sw.com/>
> > >
> > >
> >
> > --
> > With best regards, Ivan Kudryavtsev
> > Bitworks LLC
> > Cell RU: +7-923-414-1515
> > Cell USA: +1-201-257-1512
> > WWW: http://bitworks.software/ <http://bw-sw.com/>
> >
>
>
> --
>
> Andrija Panić
>


-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>


Re: CloudStack 4.11.2 SS problem

2019-04-15 Thread Ivan Kudryavtsev
To follow up: when the SSVM boots, it tries to redownload all the templates
from their original sources, which leads to the following outcomes:
- if the source is not available, the result is "No route to host (Host
unreachable)";
- if the template has been changed at the source, it leads to an MD5 sum error.

Any ideas why the SSVM tries to download all the templates again?
I have never seen that before.


пн, 15 апр. 2019 г. в 09:40, Ivan Kudryavtsev :

> Hello, community.
>
> Today, We've met the problem with ACS SS, which looks like a critical
> error. In some point of time, new templates stopped to upload and the old
> ones were unable to be removed.
>
> After the SSVM recreation, I've met the situation when some templates are
> not activated and have their "template.properties" size set to 0.
>
> More to add, certain already working templates were tried to be
> redownloaded and got errors like:
> Failed post download script: checksum
> "{MD5}9f8c94ed7e4b19a78d4f0e3fc406d81b" didn't match the given value,
> "{MD5}7eed347f4cc7e66f55e4f668cd9a5151"
> I've checked the following:
> - no lack of spare space on SS;
> - no problems with management servers in the last months;
> - no problems with SSVM.
>
> It's ACS 4.11.2, all VMs are working, of course as templates are copied to
> primary, but we've lost almost half of the template repository.
>
> Is there a way to recreate "template.properties" from DB or another
> approach? All the templates are still in place, but they are not activated
> upon SSVM start.
>
> Many thanks.
>
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks LLC
> Cell RU: +7-923-414-1515
> Cell USA: +1-201-257-1512
> WWW: http://bitworks.software/ <http://bw-sw.com/>
>
>

-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>


CloudStack 4.11.2 SS problem

2019-04-15 Thread Ivan Kudryavtsev
Hello, community.

Today we've hit a problem with the ACS secondary storage which looks like a
critical error. At some point in time, new templates stopped uploading and
the old ones could not be removed.

After recreating the SSVM, I've run into a situation where some templates
are not activated and have their "template.properties" size set to 0.

To add to that, certain already-working templates were redownloaded and
failed with errors like:
Failed post download script: checksum
"{MD5}9f8c94ed7e4b19a78d4f0e3fc406d81b" didn't match the given value,
"{MD5}7eed347f4cc7e66f55e4f668cd9a5151"
I've checked the following:
- no lack of spare space on SS;
- no problems with management servers in the last months;
- no problems with SSVM.

It's ACS 4.11.2. All VMs are working, of course, since templates are copied
to primary storage, but we've lost almost half of the template repository.

Is there a way to recreate "template.properties" from the DB, or is there
another approach? All the templates are still in place, but they are not
activated upon SSVM start.

Many thanks.


-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>


Re: Packer and Cloudstack

2019-04-05 Thread Ivan Kudryavtsev
Hi, Swen.
We use Packer for building templates (Ubuntu 16, 18, CentOS 7, Debian 9).
Please contact me directly and I will share what you need.

пт, 5 апр. 2019 г. в 10:54, Swen - swen.io :

> Hi all,
>
> does anybody have experience with Packer (www.packer.io)? I want to create
> templates using Packer and Cloudstack is supported. But I cannot find a way
> to use a preseed file to create a Debian/Ubuntu template.
> Thanks for any help!
>
> cu Swen
>
>
>

-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>


Re: How to re-create virtual router

2019-03-29 Thread Ivan Kudryavtsev
Investigate the stack trace in the logs and share everything related once you find it.

пт, 29 мар. 2019 г., 10:19 Fariborz Navidan :

> Not for me! I think my CS DB is inconsistent somehow! I cannot figure out
> which tables should I manipulate to fix it.
>
> On Fri, Mar 29, 2019 at 6:45 PM Ivan Kudryavtsev  >
> wrote:
>
> > VR deletion is OK for Basic Zone in 4.11.2, work normally, VR is created
> > automatically.
> >
> > пт, 29 мар. 2019 г., 10:04 Fariborz Navidan :
> >
> > > There should be inconsistency in DB. Because I did a wrong before and
> > have
> > > deleted all records in domain_router and router_network_ref tables
> > > manually. I thought it will cause CS to think network is not
> implemented
> > > yet and will re-implement but it didn't. Indeed it still shows in logs
> > > "Asking VirtualRouter to implement network...". "Setup" state of the
> > > network is preventing from re-creation of router. What can I chnage
> state
> > > of network to so that cause CS to create new router for it?
> > >
> > > Thanks
> > >
> > >
> > > On Fri, Mar 29, 2019 at 5:12 PM Fariborz Navidan <
> mdvlinqu...@gmail.com>
> > > wrote:
> > >
> > > > 4.11.2 with KVM.
> > > >
> > > > On Fri, Mar 29, 2019 at 5:08 PM Andrija Panic <
> andrija.pa...@gmail.com
> > >
> > > > wrote:
> > > >
> > > >> cloudstack version, hypervisor (version) ?
> > > >>
> > > >> On Fri, 29 Mar 2019 at 13:34, Fariborz Navidan <
> mdvlinqu...@gmail.com
> > >
> > > >> wrote:
> > > >>
> > > >> > I did marked router vm instance to Stopped and set remoed to NULL.
> > In
> > > >> > infrastructure overview it shows 1 virtual router but clicking on
> > > View,
> > > >> > shows no data. Then I restarted the network with clleanup, It said
> > > >> > [Datacenter:1] is unreachable and automatically marked the vm
> > instance
> > > >> to
> > > >> > destroyed and removed it (set removed field). But it did not
> create
> > > the
> > > >> > router again!
> > > >> >
> > > >> > On Fri, Mar 29, 2019 at 4:46 PM Thomas Joseph <
> > thomas.jo...@gmail.com
> > > >
> > > >> > wrote:
> > > >> >
> > > >> > > They running the restart network with cleanup=true
> > > >> > >
> > > >> > > On Fri, 29 Mar 2019, 10:39 am Fariborz Navidan, <
> > > >> mdvlinqu...@gmail.com>
> > > >> > > wrote:
> > > >> > >
> > > >> > > > It fails to restart networl. Log says it cannot find virtual
> > > router.
> > > >> > > Bellow
> > > >> > > > is the log:
> > > >> > > >
> > > >> > > > 2019-03-29 10:53:41,621 DEBUG [c.c.c.ClusterManagerImpl]
> > > >> > > > (Cluster-Heartbeat-1:ctx-277a1d9d) (logid:9902f1d7) Detected
> > > >> management
> > > >> > > > node left, id:8, nodeIP:178.33.230.41
> > > >> > > > 2019-03-29 10:53:41,621 INFO  [c.c.c.ClusterManagerImpl]
> > > >> > > > (Cluster-Heartbeat-1:ctx-277a1d9d) (logid:9902f1d7) Trying to
> > > >> connect
> > > >> > to
> > > >> > > > 178.33.230.41
> > > >> > > > 2019-03-29 10:53:41,621 INFO  [c.c.c.ClusterManagerImpl]
> > > >> > > > (Cluster-Heartbeat-1:ctx-277a1d9d) (logid:9902f1d7) Management
> > > node
> > > >> 8
> > > >> > is
> > > >> > > > detected inactive by timestamp but is pingable
> > > >> > > > 2019-03-29 10:53:42,526 DEBUG [c.c.a.ApiServlet]
> > > >> > > > (qtp788117692-21:ctx-219919fa) (logid:c3be28b0) ===START===
> > > >> > 2.190.177.97
> > > >> > > > -- GET
> > > >> > > >
> > > >> > > >
> > > >> > >
> > > >> >
> > > >>
> > >
> >
> command=restartNetwork=json=6e644551-5aee-4c9b-a75f-134f544ee97c=false=false&_=1553853224533
> > > >> > > > 2019-03-29 10:53:42,528 DEBUG [c.c.a.ApiServer]
> > > >> > > > (qtp788117692-21:ctx-219919fa ctx-3b77ed02) (

Re: How to re-create virtual router

2019-03-29 Thread Ivan Kudryavtsev
VR deletion is OK for a Basic Zone in 4.11.2; it works normally and the VR
is created automatically.
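
For the record, in my environment a plain network restart with cleanup
recreates it, e.g. via CloudMonkey (a sketch; the UUID is a placeholder):

  restart network id=<network-uuid> cleanup=true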

пт, 29 мар. 2019 г., 10:04 Fariborz Navidan :

> There should be inconsistency in DB. Because I did a wrong before and have
> deleted all records in domain_router and router_network_ref tables
> manually. I thought it will cause CS to think network is not implemented
> yet and will re-implement but it didn't. Indeed it still shows in logs
> "Asking VirtualRouter to implement network...". "Setup" state of the
> network is preventing from re-creation of router. What can I chnage state
> of network to so that cause CS to create new router for it?
>
> Thanks
>
>
> On Fri, Mar 29, 2019 at 5:12 PM Fariborz Navidan 
> wrote:
>
> > 4.11.2 with KVM.
> >
> > On Fri, Mar 29, 2019 at 5:08 PM Andrija Panic 
> > wrote:
> >
> >> cloudstack version, hypervisor (version) ?
> >>
> >> On Fri, 29 Mar 2019 at 13:34, Fariborz Navidan 
> >> wrote:
> >>
> >> > I did marked router vm instance to Stopped and set remoed to NULL. In
> >> > infrastructure overview it shows 1 virtual router but clicking on
> View,
> >> > shows no data. Then I restarted the network with clleanup, It said
> >> > [Datacenter:1] is unreachable and automatically marked the vm instance
> >> to
> >> > destroyed and removed it (set removed field). But it did not create
> the
> >> > router again!
> >> >
> >> > On Fri, Mar 29, 2019 at 4:46 PM Thomas Joseph  >
> >> > wrote:
> >> >
> >> > > They running the restart network with cleanup=true
> >> > >
> >> > > On Fri, 29 Mar 2019, 10:39 am Fariborz Navidan, <
> >> mdvlinqu...@gmail.com>
> >> > > wrote:
> >> > >
> >> > > > It fails to restart networl. Log says it cannot find virtual
> router.
> >> > > Bellow
> >> > > > is the log:
> >> > > >
> >> > > > 2019-03-29 10:53:41,621 DEBUG [c.c.c.ClusterManagerImpl]
> >> > > > (Cluster-Heartbeat-1:ctx-277a1d9d) (logid:9902f1d7) Detected
> >> management
> >> > > > node left, id:8, nodeIP:178.33.230.41
> >> > > > 2019-03-29 10:53:41,621 INFO  [c.c.c.ClusterManagerImpl]
> >> > > > (Cluster-Heartbeat-1:ctx-277a1d9d) (logid:9902f1d7) Trying to
> >> connect
> >> > to
> >> > > > 178.33.230.41
> >> > > > 2019-03-29 10:53:41,621 INFO  [c.c.c.ClusterManagerImpl]
> >> > > > (Cluster-Heartbeat-1:ctx-277a1d9d) (logid:9902f1d7) Management
> node
> >> 8
> >> > is
> >> > > > detected inactive by timestamp but is pingable
> >> > > > 2019-03-29 10:53:42,526 DEBUG [c.c.a.ApiServlet]
> >> > > > (qtp788117692-21:ctx-219919fa) (logid:c3be28b0) ===START===
> >> > 2.190.177.97
> >> > > > -- GET
> >> > > >
> >> > > >
> >> > >
> >> >
> >>
> command=restartNetwork=json=6e644551-5aee-4c9b-a75f-134f544ee97c=false=false&_=1553853224533
> >> > > > 2019-03-29 10:53:42,528 DEBUG [c.c.a.ApiServer]
> >> > > > (qtp788117692-21:ctx-219919fa ctx-3b77ed02) (logid:c3be28b0) CIDRs
> >> from
> >> > > > which account 'Acct[27cd01ef-3907-11e9-87ab-a4bf012ed1a6-admin]'
> is
> >> > > allowed
> >> > > > to perform API calls: 0.0.0.0/0,::/0
> >> > > > 2019-03-29 10:53:42,536 INFO  [o.a.c.f.j.i.AsyncJobMonitor]
> >> > > > (API-Job-Executor-2:ctx-5a2a6bbe job-16321) (logid:b95430aa) Add
> >> > > job-16321
> >> > > > into job monitoring
> >> > > > 2019-03-29 10:53:42,539 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> >> > > > (qtp788117692-21:ctx-219919fa ctx-3b77ed02) (logid:c3be28b0)
> submit
> >> > async
> >> > > > job-16321, details: AsyncJobVO {id:16321, userId: 2, accountId: 2,
> >> > > > instanceType: None, instanceId: null, cmd:
> >> > > > org.apache.cloudstack.api.command.user.network.RestartNetworkCmd,
> >> > > cmdInfo:
> >> > > >
> >> > > >
> >> > >
> >> >
> >>
> {"makeredundant":"false","cleanup":"false","response":"json","ctxUserId":"2","httpmethod":"GET","ctxStartEventId":"3200","id":"6e644551-5aee-4c9b-a75f-134f544ee97c","ctxDetails":"{\"interface
> >> > > >
> >> > > >
> >> > >
> >> >
> >>
> com.cloud.network.Network\":\"6e644551-5aee-4c9b-a75f-134f544ee97c\"}","ctxAccountId":"2","uuid":"6e644551-5aee-4c9b-a75f-134f544ee97c","cmdEventType":"NETWORK.RESTART","_":"1553853224533"},
> >> > > > cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode:
> 0,
> >> > > > result: null, initMsid: 279278805450993, completeMsid: null,
> >> > lastUpdated:
> >> > > > null, lastPolled: null, created: null}
> >> > > > 2019-03-29 10:53:42,540 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> >> > > > (API-Job-Executor-2:ctx-5a2a6bbe job-16321) (logid:155f349b)
> >> Executing
> >> > > > AsyncJobVO {id:16321, userId: 2, accountId: 2, instanceType: None,
> >> > > > instanceId: null, cmd:
> >> > > > org.apache.cloudstack.api.command.user.network.RestartNetworkCmd,
> >> > > cmdInfo:
> >> > > >
> >> > > >
> >> > >
> >> >
> >>
> {"makeredundant":"false","cleanup":"false","response":"json","ctxUserId":"2","httpmethod":"GET","ctxStartEventId":"3200","id":"6e644551-5aee-4c9b-a75f-134f544ee97c","ctxDetails":"{\"interface
> >> > > >
> >> > > >
> >> > >
> >> >
> >>
> 

Re: cannot start system VMs: disaster after maintenance followup

2019-03-21 Thread Ivan Kudryavtsev
1:45:01.168+: 566: info : libvirt version: 4.5.0, package:
> > >> > 10.el7_6.6
> > >> > > (CentOS BuildSystem <http://bugs.centos.org>,
> 2019-03-14-10:21:47,
> > >> > > x86-01.bsys.centos.org)
> > >> > > Mar 21 11:45:01 mtl1-apphst03.mt.pbt.com.mt libvirtd[537]:
> > 2019-03-21
> > >> > > 11:45:01.168+: 566: info : hostname:
> > mtl1-apphst03.mt.pbt.com.mt
> > >> > > Mar 21 11:45:01 mtl1-apphst03.mt.pbt.com.mt libvirtd[537]:
> > 2019-03-21
> > >> > > 11:45:01.168+: 566: error : virFirewallApplyRuleDirect:709 :
> > >> internal
> > >> > > error: Failed to apply firewall rules /usr/sbin/iptables -w
> --table
> > >> nat
> > >> > > --insert POSTROUTING --source 192.168.122.0/24 '!' --destination
> > >> > > 192.168.122.0/24 --jump MASQUERADE: iptables v1.4.21: can't
> > >> initialize
> > >> > > iptables table `nat': Table does not exist (do you need to
> insmod?)
> > >> > > Mar 21 11:45:01 mtl1-apphst03.mt.pbt.com.mt libvirtd[537]:
> Perhaps
> > >> > > iptables
> > >> > > or your kernel needs to be upgraded.
> > >> > > Mar 21 11:45:01 mtl1-apphst03.mt.pbt.com.mt dnsmasq[12206]: read
> > >> > > /etc/hosts
> > >> > > - 4 addresses
> > >> > > Mar 21 11:45:01 mtl1-apphst03.mt.pbt.com.mt dnsmasq[12206]: read
> > >> > > /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
> > >> > > Mar 21 11:45:01 mtl1-apphst03.mt.pbt.com.mt dnsmasq-dhcp[12206]:
> > read
> > >> > > /var/lib/libvirt/dnsmasq/default.hostsfile
> > >> > > Mar 21 11:45:01 mtl1-apphst03.mt.pbt.com.mt libvirtd[537]:
> > 2019-03-21
> > >> > > 11:45:01.354+: 566: warning : virSecurityManagerNew:189 :
> > >> Configured
> > >> > > security driver "none" disables default policy to create confined
> > >> guests
> > >> > > Mar 21 11:49:57 mtl1-apphst03.mt.pbt.com.mt libvirtd[537]:
> > 2019-03-21
> > >> > > 11:49:57.354+: 542: warning : qemuDomainObjTaint:7521 : Domain
> > >> id=2
> > >> > > name='s-1-VM' uuid=1a06d3a7-4e3f-4cba-912f-74ae24569bac is
> tainted:
> > >> > > high-privileges
> > >> > > Mar 21 11:49:59 mtl1-apphst03.mt.pbt.com.mt libvirtd[537]:
> > 2019-03-21
> > >> > > 11:49:59.402+: 540: warning : qemuDomainObjTaint:7521 : Domain
> > >> id=3
> > >> > > name='v-2-VM' uuid=af2a8342-cd9b-4b55-ba12-480634a31d65 is
> tainted:
> > >> > > high-privileges
> > >> > >
> > >> > >
> > >> > > What can be done about that ?
> > >> > >
> > >> >
> > >> >
> > >> > --
> > >> >
> > >> > Andrija Panić
> > >> >
> > >>
> > >
> > >
> > > --
> > >
> > > Andrija Panić
> > >
> >
> >
> > --
> >
> > Andrija Panić
> >
>


-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>


Re: Disaster after maintenance

2019-03-19 Thread Ivan Kudryavtsev
Jevgeniy, it may be a documentation bug. Take a look:
https://github.com/apache/cloudstack-documentation/pull/27/files

вт, 19 мар. 2019 г., 9:09 Jevgeni Zolotarjov :

> That's it - libvirtd failed to start on second host.
> Tried restarting, but it does not start.
>
>
> >> Do you have some NUMA constraints or anything which requires particular
> RAM configuration?
> No
>
>  libvirtd.service - Virtualization daemon
>Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled;
> vendor preset: enabled)
>Active: failed (Result: start-limit) since Tue 2019-03-19 13:03:07 GMT;
> 12s ago
>  Docs: man:libvirtd(8)
>https://libvirt.org
>   Process: 892 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=exited,
> status=1/FAILURE)
>  Main PID: 892 (code=exited, status=1/FAILURE)
> Tasks: 19 (limit: 32768)
>CGroup: /system.slice/libvirtd.service
>├─11338 /usr/sbin/libvirtd -d -l
>├─11909 /usr/sbin/dnsmasq
> --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro
> --dhcp-script=/usr/libexec/libvirt_leaseshelper
>└─11910 /usr/sbin/dnsmasq
> --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro
> --dhcp-script=/usr/libexec/libvirt_leaseshelper
>
> Mar 19 13:03:07 mtl1-apphst04.mt.pbt.com.mt systemd[1]: Failed to start
> Virtualization daemon.
> Mar 19 13:03:07 mtl1-apphst04.mt.pbt.com.mt systemd[1]: Unit
> libvirtd.service entered failed state.
> Mar 19 13:03:07 mtl1-apphst04.mt.pbt.com.mt systemd[1]: libvirtd.service
> failed.
> Mar 19 13:03:07 mtl1-apphst04.mt.pbt.com.mt systemd[1]: libvirtd.service
> holdoff time over, scheduling restart.
> Mar 19 13:03:07 mtl1-apphst04.mt.pbt.com.mt systemd[1]: Stopped
> Virtualization daemon.
> Mar 19 13:03:07 mtl1-apphst04.mt.pbt.com.mt systemd[1]: start request
> repeated too quickly for libvirtd.service
> Mar 19 13:03:07 mtl1-apphst04.mt.pbt.com.mt systemd[1]: Failed to start
> Virtualization daemon.
> Mar 19 13:03:07 mtl1-apphst04.mt.pbt.com.mt systemd[1]: Unit
> libvirtd.service entered failed state.
> Mar 19 13:03:07 mtl1-apphst04.mt.pbt.com.mt systemd[1]: libvirtd.service
> failed.
>
>
> On Tue, Mar 19, 2019 at 3:04 PM Paul Angus 
> wrote:
>
> > Can you check that the cloudstack agent is running on the host and the
> > agent logs (usual logs directory)
> > Also worth checking that libvirt has started ok.  Do you have some NUMA
> > constraints or anything which requires particular RAM configuration?
> >
> > paul.an...@shapeblue.com
> > www.shapeblue.com
> > Amadeus House, Floral Street, London  WC2E 9DPUK
> > @shapeblue
> >
> >
> >
> >
> > -Original Message-
> > From: Jevgeni Zolotarjov 
> > Sent: 19 March 2019 14:49
> > To: users@cloudstack.apache.org
> > Subject: Re: Disaster after maintenance
> >
> > Can you try migrating a VM to the server that you changed the RAM amount?
> >
> > Also:
> > What is the hypervisor version?
> > KVM
> > QEMU Version : 2.0.0
> > Release : 1.el7.6
> >
> >
> > Host status in ACS?
> > 1st server: Unsecure
> > 2nd server: Disconnected
> >
> > Did you try to force a VM to start/deploy in this server where you
> changed
> > the RAM?
> > Host status became disconnected. I don't know how to make it "connected"
> > again
> >
> >
> >
> > On Tue, Mar 19, 2019 at 2:42 PM Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > Can you try migrating a VM to the server that you changed the RAM
> amount?
> > >
> > > Also:
> > > What is the hypervisor version?
> > > Host status in ACS?
> > > Did you try to force a VM to start/deploy in this server where you
> > > changed the RAM?
> > >
> > >
> > > On Tue, Mar 19, 2019 at 9:39 AM Jevgeni Zolotarjov
> > >  > > >
> > > wrote:
> > >
> > > > We have Cloudstack 4.11.2 setup running fine for few months (>4) The
> > > > setup is very simple: 2 hosts We decided to do a maintenance to
> > > > increase RAM on both servers
> > > >
> > > > For this we put first server to maintenance. All VMS moved to second
> > > > host after a while.
> > > >
> > > > Then first server was shutdown, RAM increased, server turned ON.
> > > > Now nothing starts on first server.
> > > >
> > > >
> > > > Tried to delete network, but this fails as well
> > > >
> > > > Please help !
> > > >
> > > > Here is extract from log:
> > > > ==
> > > > 2019-03-19 12:27:53,064 DEBUG [o.a.c.s.SecondaryStorageManagerImpl]
> > > > (secstorage-1:ctx-16d6c797) (logid:7e3160ce) Zone 1 is ready to
> > > > launch secondary storage VM
> > > > 2019-03-19 12:27:53,125 DEBUG [c.c.c.ConsoleProxyManagerImpl]
> > > > (consoleproxy-1:ctx-cbd034b9) (logid:0a8c8bf4) Zone 1 is ready to
> > > > launch console proxy
> > > > 2019-03-19 12:27:53,181 DEBUG [c.c.a.ApiServlet]
> > > > (qtp510113906-285:ctx-6c5e11c3) (logid:cd8e30be) ===START===
> > > 192.168.5.140
> > > > -- GET
> > > >
> > > >
> > > command=deleteNetwork=4ba834ed-48f3-468f-b667-9bb2d2c258f1
> > > =json&_=1552998473154
> > > > 2019-03-19 12:27:53,186 DEBUG 

Re: Modify # of cores and cpu speed of a VM

2019-03-18 Thread Ivan Kudryavtsev
Fariborz, it's just because the native CloudStack UI lacks this
functionality. You can easily change it through the API or CloudMonkey, or
with our CloudStack-UI, which handles that easily.
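
With CloudMonkey and a custom compute offering it is something along these
lines (a sketch; the UUIDs are placeholders and the VM has to be stopped
first):

  scale virtualmachine id=<vm-uuid> serviceofferingid=<custom-offering-uuid> \
      details[0].cpuNumber=4 details[0].cpuSpeed=2000 details[0].memory=8192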

пн, 18 мар. 2019 г. в 14:59, Fariborz Navidan :

> Forgot to mention. The VM which is on a customized compute offering. Also
> change service offering feature does not list available computer offerings.
>
> Thanks
>
> On Mon, Mar 18, 2019 at 10:27 PM Fariborz Navidan 
> wrote:
>
> > Hello,
> > Cloudstack UI does not allow editing number of cores and cpu speed of a
> > VM, Is it technically possible to modify cpu setings without redeploying
> > from template?
> >
>


-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>


[CloudStack-UI] Release 1.411.29

2019-03-15 Thread Ivan Kudryavtsev
Hello community,

after several months we are releasing version 1.411.29 of CSUI. Notably,
this is the first version we have deployed into production for a customer,
with some third-party integrations.

Read the release overview:
https://bitworks.software/en/2019-03-13-cloudstack-ui-141129-is-out.html

-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>


Re: Effect of disk caching on performance

2019-03-13 Thread Ivan Kudryavtsev
Hi, Fariborz.
Several days ago there was a discussion about it. "NONE" or "WRITETHROUGH"
should be used whenever possible. WRITEBACK may be used __ONLY__ if you
don't care about data loss, e.g. for ephemeral VMs which are destroyed
after a crash anyway.

Add more RAM to the VM, implement bcache or LVM cache, add more drives, or
use SSD/NVMe, and forget about playing games with the options above.
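
For reference, the cache mode ends up as the cache attribute on the disk
driver element of the libvirt domain XML, e.g. (a sketch):

  <driver name='qemu' type='qcow2' cache='none'/>
  <driver name='qemu' type='qcow2' cache='writethrough'/>
  <driver name='qemu' type='qcow2' cache='writeback'/>

so you can confirm on the KVM host which mode a given VM is really running with.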

ср, 13 мар. 2019 г. в 11:19, Fariborz Navidan :

> Hello All,
>
> How does disk caching affects the VM's performance? If yes, which type of
> disk caching do you advise?
>
> Thanks
>


-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>


Re: Hosts power control automation via IPMI, iLO, DRAC

2019-03-11 Thread Ivan Kudryavtsev
Konstantin,

in general, this feature is very closely coupled with VM live migration,
which is usually undesired and run under human control, and the
implementation depends a lot on the compaction policy used in the cloud...
Actually, it can be implemented easily outside of CloudStack.

Personally, I recommend implementing it outside of CloudStack. We have run
such compaction experiments not only to optimize power consumption but also
to optimize CPU load across the cloud, allowing us to run nodes with a
larger RAM/CPU ratio.

So, in general, I don't think it should be done inside the CS core.
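
As a rough illustration, an external controller only needs the out-of-band
credentials and something like ipmitool (a sketch; the BMC address and
credentials are placeholders), combined with the CloudStack API to drain
the host first:

  # power a drained host off
  ipmitool -I lanplus -H 10.0.0.21 -U admin -P secret chassis power soft
  # bring it back when capacity is needed again
  ipmitool -I lanplus -H 10.0.0.21 -U admin -P secret chassis power on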

пн, 11 мар. 2019 г., 14:20 Konstantin :

> Hello, Cloudstackers :)
>
> I would like to offer you a topic to discuss and share your experience:
>
> Does any of you tried to arrange host`s power control and link it to the
> cloud workload?
>
> The idea is to automate hosts switch on / switch off depend of current
> workload and number of VMs running.
>
> It should be possible to control current workload of the hosts and demand
> for additional resources to proactive start of shutdown the hypervisors and
> save energy, environment and and our money.
>
> All out-of-the-band features is already here
>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Out-of-band+Management+for+CloudStack
>
> and most of the hypervisors (XEN and ESXi at least) can be controlled
> remotely with no issue
>
>
> What do you think?
>
>
> Regards,
> Konstantin
>


Re: Cloudstack is unable to create new VM

2019-03-09 Thread Ivan Kudryavtsev
What about RAM and storage?

сб, 9 мар. 2019 г., 9:22 Fariborz Navidan :

> I have already set cluster.cpu.allocated.capacity.disablethreshold to 1
> and cpu.overprovisioning.factor to 10 both in global settings and cluster
> level.
>
> Best Regards
>
> On Sat, Mar 9, 2019 at 6:47 PM Fariborz Navidan 
> wrote:
>
> > Hello
> >
> > This is host's cpu resource statistics:
> >
> > CPU Utilized: 30.2%
> > CPU Allocated for VMs: 84.82%
> >
> > On Sat, Mar 9, 2019 at 4:20 PM Ivan Kudryavtsev <
> kudryavtsev...@bw-sw.com>
> > wrote:
> >
> >> Looks like your cluster is pretty full. Increase thresholds in cluster
> >> vars
> >> or add resources.
> >>
> >> сб, 9 мар. 2019 г., 6:55 Rafael Weingärtner <
> rafaelweingart...@gmail.com
> >> >:
> >>
> >> > In the log there is this message:
> >> > > Cannot allocate cluster list [1] for vm creation since their
> allocated
> >> > percentage crosses the disable capacity threshold defined at each
> >> > cluster/at global value for capacity Type : 1, skipping these clusters
> >> >
> >> > What is the status of your cluster's host?
> >> >
> >> > On Sat, Mar 9, 2019 at 7:14 AM Fariborz Navidan <
> mdvlinqu...@gmail.com>
> >> > wrote:
> >> >
> >> > > Hello,
> >> > >
> >> > > Here is the log for VM deployment:
> >> > >
> >> > > 2019-03-09 11:01:34,984 DEBUG [c.c.a.ApiServlet]
> >> > > (qtp788117692-18:ctx-f51fd578) (logid:113f467d) ===START===
> >> 137.74.35.65
> >> > > -- GET
> >> > >
> >> > >
> >> >
> >>
> command=deployVirtualMachine=json=bc4565d8-4029-4dbd-93eb-47137ff45188=4c10414a-51f2-4859-b673-e92c3f32cbfd=KVM=10=cd1235b8-acdf-4ce3-a3ac-3f7ddcea8870=centos=centos&_=1552125695843
> >> > > 2019-03-09 11:01:34,989 DEBUG [c.c.a.ApiServer]
> >> > > (qtp788117692-18:ctx-f51fd578 ctx-a82cdf12) (logid:113f467d) CIDRs
> >> from
> >> > > which account 'Acct[27cd01ef-3907-11e9-87ab-a4bf012ed1a6-admin]' is
> >> > allowed
> >> > > to perform API calls: 0.0.0.0/0,::/0
> >> > > 2019-03-09 11:01:35,040 DEBUG [c.c.u.AccountManagerImpl]
> >> > > (qtp788117692-18:ctx-f51fd578 ctx-a82cdf12) (logid:113f467d) Access
> >> > granted
> >> > > to Acct[27cd01ef-3907-11e9-87ab-a4bf012ed1a6-admin] to
> >> > >
> >> > >
> >> >
> >>
> org.apache.cloudstack.quota.vo.ServiceOfferingVO$$EnhancerByCGLIB$$c370fca@68de8a96
> >> > > by AffinityGroupAccessChecker
> >> > > 2019-03-09 11:01:35,042 DEBUG [c.c.u.AccountManagerImpl]
> >> > > (qtp788117692-18:ctx-f51fd578 ctx-a82cdf12) (logid:113f467d) Access
> >> > granted
> >> > > to Acct[27cd01ef-3907-11e9-87ab-a4bf012ed1a6-admin] to null by
> >> > > AffinityGroupAccessChecker
> >> > > 2019-03-09 11:01:35,046 DEBUG [c.c.n.NetworkModelImpl]
> >> > > (qtp788117692-18:ctx-f51fd578 ctx-a82cdf12) (logid:113f467d) Service
> >> > > SecurityGroup is not supported in the network id=204
> >> > > 2019-03-09 11:01:35,058 DEBUG [c.c.n.NetworkModelImpl]
> >> > > (qtp788117692-18:ctx-f51fd578 ctx-a82cdf12) (logid:113f467d) Service
> >> > > SecurityGroup is not supported in the network id=204
> >> > > 2019-03-09 11:01:35,074 DEBUG [c.c.v.UserVmManagerImpl]
> >> > > (qtp788117692-18:ctx-f51fd578 ctx-a82cdf12) (logid:113f467d)
> >> Rootdisksize
> >> > > override validation successful. Template root disk size 8GB Root
> disk
> >> > size
> >> > > specified 10GB
> >> > > 2019-03-09 11:01:35,081 DEBUG [c.c.v.UserVmManagerImpl]
> >> > > (qtp788117692-18:ctx-f51fd578 ctx-a82cdf12) (logid:113f467d)
> >> Allocating
> >> > in
> >> > > the DB for vm
> >> > > 2019-03-09 11:01:35,098 DEBUG [c.c.v.VirtualMachineManagerImpl]
> >> > > (qtp788117692-18:ctx-f51fd578 ctx-a82cdf12) (logid:113f467d)
> >> Allocating
> >> > > entries for VM: VM[User|i-2-68-VM]
> >> > > 2019-03-09 11:01:35,099 DEBUG [c.c.v.VirtualMachineManagerImpl]
> >> > > (qtp788117692-18:ctx-f51fd578 ctx-a82cdf12) (logid:113f467d)
> >> Allocating
> >> > > nics for VM[User|i-2-68-VM]
> >> > > 2019-03-09 11:01:35,100 DEBUG [o.a.c.e.o.N

Re: CloudStack Container Service deployment question

2019-03-09 Thread Ivan Kudryavtsev
AFAIK, the last CloudStack release compatible with the container service is
4.6 or something like that.

сб, 9 мар. 2019 г., 9:04 Konstantin :

> Hello!
>
> I have been following the installation guide here
>
> http://downloads.shapeblue.com/ccs/1.0/Installation_and_Administration_Guide.pdf
>
> When trying to create my first container cluster via the plugin menu, I got
> this error message
>
> 2019-03-09 13:58:00,791 DEBUG [c.c.a.ApiServlet]
> (qtp836514715-14:ctx-0b74e301 ctx-d080970b) (logid:ddeedd55) ===END===
> 192.*.*.*-- GET
>
> command=createContainerCluster=json=test==ec47bdd9-9e7e-4553-a0fb-788760d6ce63=ab6527f4-1fab-4c08-bbbf-c97f49e2bef0=1=&_=1552138846129
> 2019-03-09 13:58:00,791 WARN  [o.e.j.s.HttpChannel] (qtp836514715-14:null)
> (logid:) /client/api
> java.lang.NoSuchMethodError:
> com.cloud.offerings.NetworkOfferingVO.getEgressDefaultPolicy()Z
> at
>
> com.cloud.containercluster.ContainerClusterManagerImpl.isContainerServiceConfigured(ContainerClusterManagerImpl.java:1631)
> at
>
> com.cloud.containercluster.ContainerClusterManagerImpl.createContainerCluster(ContainerClusterManagerImpl.java:298)
> at
>
> org.apache.cloudstack.api.command.user.containercluster.CreateContainerClusterCmd.create(CreateContainerClusterCmd.java:256)
> at
>
> com.cloud.api.dispatch.CommandCreationWorker.handle(CommandCreationWorker.java:47)
> at
> com.cloud.api.dispatch.DispatchChain.dispatch(DispatchChain.java:37)
> at
> com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:88)
> at com.cloud.api.ApiServer.queueCommand(ApiServer.java:682)
> at com.cloud.api.ApiServer.handleRequest(ApiServer.java:582)
> at
> com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:310)
> at com.cloud.api.ApiServlet$1.run(ApiServlet.java:130)
> at
>
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at
>
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at
>
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:127)
> at com.cloud.api.ApiServlet.doGet(ApiServlet.java:89)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:686)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:791)
> at
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:852)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:535)
> at
>
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at
>
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at
>
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
> at
>
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
> at
>
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
> at
>
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
> at
>
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
> at
>
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> at
>
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
> at
>
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
> at
>
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at
>
> org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:527)
> at
>
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
> at
>
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at org.eclipse.jetty.server.Server.handle(Server.java:530)
> at
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347)
> at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:256)
> at
> org.eclipse.jetty.io
> .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
> at org.eclipse.jetty.io
> .FillInterest.fillable(FillInterest.java:102)
> at
> org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)
> at
>
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:247)
> at
>
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)
> at
>
> 

Re: Cloudstack is unable to create new VM

2019-03-09 Thread Ivan Kudryavtsev
> > Updating resource Type = user_vm count for Account = 2 Operation =
> > decreasing Amount = 1
> > 2019-03-09 11:01:35,208 DEBUG [c.c.r.ResourceLimitManagerImpl]
> > (API-Job-Executor-1:ctx-0c22fa00 job-508 ctx-b6238917) (logid:57397166)
> > Updating resource Type = cpu count for Account = 2 Operation = decreasing
> > Amount = 1
> > 2019-03-09 11:01:35,209 DEBUG [c.c.a.m.AgentManagerImpl]
> > (AgentManager-Handler-12:null) (logid:) SeqA 11-96232: Processing Seq
> > 11-96232:  { Cmd , MgmtId: -1, via: 11, Ver: v1, Flags: 11,
> >
> >
> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":12,"_loadInfo":"{\n
> > \"connections\": []\n}","wait":0}}] }
> > 2019-03-09 11:01:35,210 DEBUG [c.c.a.m.AgentManagerImpl]
> > (AgentManager-Handler-12:null) (logid:) SeqA 11-96232: Sending Seq
> > 11-96232:  { Ans: , MgmtId: 279278805450982, via: 11, Ver: v1, Flags:
> > 100010,
> > [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
> > 2019-03-09 11:01:35,210 DEBUG [c.c.r.ResourceLimitManagerImpl]
> > (API-Job-Executor-1:ctx-0c22fa00 job-508 ctx-b6238917) (logid:57397166)
> > Updating resource Type = memory count for Account = 2 Operation =
> > decreasing Amount = 512
> > 2019-03-09 11:01:35,215 INFO  [o.a.c.a.c.a.v.DeployVMCmdByAdmin]
> > (API-Job-Executor-1:ctx-0c22fa00 job-508 ctx-b6238917) (logid:57397166)
> > com.cloud.exception.InsufficientServerCapacityException: Unable to
> create a
> > deployment for VM[User|i-2-68-VM]Scope=interface com.cloud.dc.DataCenter;
> > id=1
> > 2019-03-09 11:01:35,215 INFO  [o.a.c.a.c.a.v.DeployVMCmdByAdmin]
> > (API-Job-Executor-1:ctx-0c22fa00 job-508 ctx-b6238917) (logid:57397166)
> > Unable to create a deployment for VM[User|i-2-68-VM]
> > com.cloud.exception.InsufficientServerCapacityException: Unable to
> create a
> > deployment for VM[User|i-2-68-VM]Scope=interface com.cloud.dc.DataCenter;
> > id=1
> > at
> >
> >
> org.apache.cloudstack.engine.cloud.entity.api.VMEntityManagerImpl.reserveVirtualMachine(VMEntityManagerImpl.java:215)
> > at
> >
> >
> org.apache.cloudstack.engine.cloud.entity.api.VirtualMachineEntityImpl.reserve(VirtualMachineEntityImpl.java:200)
> > at
> >
> >
> com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:4492)
> > at
> >
> >
> com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:4057)
> > at
> >
> >
> com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:4044)
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at
> >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > at
> >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:498)
> > at
> >
> >
> org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:338)
> > at
> >
> >
> org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197)
> > at
> >
> >
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
> > at
> >
> >
> org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:107)
> > at
> >
> >
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:174)
> > at
> >
> >
> com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:51)
> > at
> >
> >
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:174)
> > at
> >
> >
> org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
> > at
> >
> >
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
> > at
> >
> >
> org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
> > at com.sun.proxy.$Proxy164.startVirtualMachine(Unknown Source)
> > at
> >
> >
> org.apache.cloudstack.api.command.admin.vm.DeployVMCmdByAdmin.execute(DeployVMCmdByAdmin.java:50)
> > at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:150)
> > at
> >
> com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
> > at
> 

Re: Cloudstack is unable to create new VM

2019-03-08 Thread Ivan Kudryavtsev
Hi Fariborz, there are no prophets here, we are just humans. Find the
relevant logs and post them here for review.

пт, 8 мар. 2019 г., 11:00 Fariborz Navidan :

> Hello,
>
> I get the following error when creating new VM on a KVM cluster. Unable to
> create a deployment for VM[...]
>
> Please help me
>
> Thanks
>


Re: downloaded template vs disk service offering

2019-03-08 Thread Ivan Kudryavtsev
Avoid hypervisor caching when possible; it is better to add RAM to the VM or
manage in-VM writeback settings. Going with hypervisor writeback, you will
end up with angry users someday who have lost much more data than you can
imagine, even if you don't use migrations at all.

Want faster operations? Improve your storage: build RAID0 from attached
disks, combine SSD and HDD in a single VM to deliver bcache, lvmcache, etc.
Use Ceph cache pools on NVMe over low-latency DC switches, but don't use
writeback.
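
For example, a minimal lvmcache sketch (assuming an existing volume group
vg0 holding the slow LV plus a spare NVMe drive; all names are placeholders):

# Attach an NVMe-backed cache to an existing slow logical volume;
# writethrough keeps the data safe if the cache device dies
pvcreate /dev/nvme0n1
vgextend vg0 /dev/nvme0n1
lvcreate --type cache-pool -L 100G -n cachepool vg0 /dev/nvme0n1
lvconvert --type cache --cachepool vg0/cachepool --cachemode writethrough vg0/slow_lv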

пт, 8 мар. 2019 г., 6:54 Andrija Panic :

> Hi Piotr,
>
>
> https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/cha.cachemodes.html#sec.cache.mode.live.migration
>
> So yes, CEPH being considered "clustered storage" - live migration works -
> but in case of QCOW2 (NFS) it doesn't actually work.
>
> BTW, as for CEPH, you would probably want to also check RBD client side
> write-back cache... (versus/instead qemu cache=writeback) (i.e. 32MB
> writeback cache in librbd per each volume, etc.).
> I believe I did test one versus another caching (was operating CEPH backed
> CloudStack installation myself a while ago) - afaik, there were no visible
> performance/latency differences in RBD write-back caching versus qemu
> writeback caching (both active = issues with performance)
>
> Kind regards,
> Andrija
>
> andrija.pa...@shapeblue.com
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>
>
>
>
> -Original Message-
> From: Piotr Pisz 
> Sent: 08 March 2019 12:44
> To: users@cloudstack.apache.org
> Subject: RE: downloaded template vs disk service offering
>
> Hey Andrija,
>
> Thank you for the explanation, now I finally understand how it works :-)
> As for live migration, the migration of such machines (with cache =
> writeback) in ceph rbd (centos 7 kvm) works without any problem.
>
> Regards,
> Piotr
>
>
> -Original Message-
> From: Andrija Panic 
> Sent: Friday, March 8, 2019 9:22 AM
> To: users@cloudstack.apache.org
> Subject: RE: downloaded template vs disk service offering
>
> Hi Piotr,
>
> It's true that setting the cache mode for a Disk Offering via the GUI
> doesn't get it implemented in the DB (does the API work fine? did you test
> it? if so, please raise a GitHub issue with a description).
>
> In general, you can initially set the cache mode for a disk only on the Disk
> Offering (possibly also on the Compute Offering for the root disk).
> When you make a new template from an existing disk, this new template will
> have the source_template_id field in the vm_templates table (on its row) set
> to your original template from which you created the volume (template -->
> disk --> new template).
>
> Also worth noting - all volumes inherit this cache mode setting "on the fly"
> (when you start the VM) from their template (all volumes have a
> "template_id" field in the "volumes" table).
>
> So if you set cache_mode (via the DB) for a specific template, it will affect
> ALL VMs created from that template... (once you stop and start those VMs,
> obviously) - i.e. when you deploy a new VM, some column values are copied
> over to the actual volume row/table, but some are just read on the fly, like
> this cache_mode.
>
> Nevertheless, I would strongly discourage using write-back cache for
> disks, since:
>
> - it can be severely risky, in case of power loss, kernel panic, etc - you
> will end up with corrupted volumes.
> - VMs can NOT be live migrated (at least with KVM), with cache set to
> anything else than none (google it yourself) - happy to learn if this
> limitation is present for other Hypervisors as well
>
> Fine to play with, but I would skip it in production.
>
> Kind regards,
> Andrija
>
> andrija.pa...@shapeblue.com
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK @shapeblue
>
>
>
>
> -Original Message-
> From: Piotr Pisz 
> Sent: 08 March 2019 08:32
> To: users@cloudstack.apache.org
> Subject: downloaded template vs disk service offering
>
> Hi Users :-)
>
> I have a question.
> If, from a disk for which the cache = writeback parameter was set, I make
> a template, all new machines have cache = writeback. And that's ok.
> If I load a template from outside, volume has cache = none. I have not
> found a place in DB where I could improve this parameter.
> Do you know where we can set the template cache?
>
> PS. Disk offering made with GUI does not set the cache parameter in DB...
>
> Regards,
> Piotr
>
>
>


Re: Why and when is expected to remove basic network support?

2019-03-03 Thread Ivan Kudryavtsev
No, you don't. Only VLAN support may be required, but it depends on the
chosen model.


Re: Why and when is expected to remove basic network support?

2019-03-03 Thread Ivan Kudryavtsev
Ignacio, basic networking is planned for removal because 'advanced shared
with SG' does the same thing, so it is just duplicated functionality. That is
why it will be removed. If you are not in production, use the option above to
ensure better future compatibility.

вс, 3 мар. 2019 г., 15:03 Ignacio Ocampo :

> Hi all,
>
> From a discussion about "Why CloudStack 5" I saw there are plans to remove
> basic network support.
>
> I'm still running a proof of concept with CloudStack using Basic network,
> and I would like to know more about the why and when :)
>
> This is in order to make better and more informed decisions on the setup of
> my environment to comply with the future roadmap of the project.
>
> Also, if you could share technical details about pros and cons of basic vs
> advanced networking based in your experience, will be great.
>
> Thanks!
>
> --
> Ignacio Ocampo
>


Re: CPU speed in service offering

2019-03-03 Thread Ivan Kudryavtsev
I copied the query from my .mysql_history. Maybe it's a read-only view, but
MySQL does have updatable views:
http://www.mysqltutorial.org/create-sql-updatable-views.aspx
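
If the view turns out not to be updatable on your MySQL version, the same
change can be made on the base table (a sketch; the offering id is a
placeholder, and take a DB backup first):

# Look up the offering id via the view, then update the base table directly
mysql -u root -p cloud -e "SELECT id, name, limit_cpu_use FROM service_offering_view WHERE name LIKE 'abc%';"
mysql -u root -p cloud -e "UPDATE service_offering SET limit_cpu_use = 0 WHERE id = 42;"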


вс, 3 мар. 2019 г., 7:55 Thomas Joseph :

> A view table is readonly, the table named 'service_offering' would be the
> one needs updating.
>
> mysql> desc service_offering;
>
> ++--+--+-+-++
> | Field  | Type | Null | Key | Default |
> Extra  |
>
> ++--+--+-+-++
> | id | bigint(20) unsigned  | NO   | PRI | NULL|
> auto_increment |
> | cpu| int(10) unsigned | YES  | | NULL
> ||
> | speed  | int(10) unsigned | YES  | | NULL
> ||
> | ram_size   | bigint(20) unsigned  | YES  | | NULL
> ||
> | nw_rate| smallint(5) unsigned | YES  | | 200
> ||
> | mc_rate| smallint(5) unsigned | YES  | | 10
> ||
> | ha_enabled | tinyint(1) unsigned  | NO   | | 0
> ||
> | limit_cpu_use  | tinyint(1) unsigned  | NO   | | 0
> ||
> | host_tag   | varchar(255) | YES  | | NULL
> ||
> | default_use| tinyint(1) unsigned  | NO   | | 0
> ||
> | vm_type| varchar(32)  | YES  | | NULL
> ||
> | sort_key   | int(32)  | NO   | | 0
> ||
> | is_volatile| tinyint(1) unsigned  | NO   | | 0
> ||
> | deployment_planner | varchar(255) | YES  | | NULL
> ||
>
> ++--+------+-+-----++
> 14 rows in set (0.00 sec)
>
> regards,
> Thomas
>
> On Sat, Mar 2, 2019 at 8:28 PM Ivan Kudryavtsev 
> wrote:
>
> > CPU freq is per-core. The only way is to fix it in the database and next
> > stop/start VMs.
> >
> > Here is the SQL expr like the following:
> >
> > update service_offering_view set limit_cpu_use=0 where name like 'abc%'
> and
> > domain_path='/cde/';
> >
> > сб, 2 мар. 2019 г. в 15:11, Fariborz Navidan :
> >
> > > Also, how do I disable cpu cap for existing vms?
> > >
> > > On Sat, Mar 2, 2019 at 11:35 PM Fariborz Navidan <
> mdvlinqu...@gmail.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > Indeed I didn't realized the answer. The one we set in a service
> > offering
> > > > is allocated for each core or in total for whole the VM?
> > > >
> > > > On Sat, Mar 2, 2019 at 10:35 PM Ivan Kudryavtsev <
> > > kudryavtsev...@bw-sw.com>
> > > > wrote:
> > > >
> > > >> Actually, it works like everyone expects. In case of KVM you can
> just
> > > take
> > > >> a look at running instance with ps xa. But, I don't recommend
> setting
> > > CPU
> > > >> cap, though... VM will experience CPU steal, and users will not be
> > > happy.
> > > >> Better to deploy nodes with low cpu frequency and many cores.
> > > >>
> > > >> Without capping, the frequency is only for the resource calculation,
> > > when
> > > >> VMs are deployed, specifically:
> > > >> node.core-freq > freq && aggregate-node-freq-avail - cores x freq >
> 0
> > ->
> > > >> permit
> > > >>
> > > >> сб, 2 мар. 2019 г., 13:51 Fariborz Navidan :
> > > >>
> > > >> > Hi,
> > > >> >
> > > >> > I am wondering how the cpu time usage is calculated for a VM. Is
> it
> > in
> > > >> per
> > > >> > core basis or the total fraction of cpu a vm can use. For example,
> > > when
> > > >> we
> > > >> > set 2 cores and 2000 MHz, the VM receives total of 2000MHz of
> > 4000MHz
> > > >> > processing power?
> > > >> >
> > > >> > Thanks
> > > >> >
> > > >>
> > > >
> > >
> >
> >
> > --
> > With best regards, Ivan Kudryavtsev
> > Bitworks LLC
> > Cell RU: +7-923-414-1515
> > Cell USA: +1-201-257-1512
> > WWW: http://bitworks.software/ <http://bw-sw.com/>
> >
>


Re: CPU speed in service offering

2019-03-02 Thread Ivan Kudryavtsev
CPU frequency is per-core. The only way is to fix it in the database and then
stop/start the VMs.

Here is an SQL expression along these lines:

update service_offering_view set limit_cpu_use=0 where name like 'abc%' and
domain_path='/cde/';

сб, 2 мар. 2019 г. в 15:11, Fariborz Navidan :

> Also, how do I disable cpu cap for existing vms?
>
> On Sat, Mar 2, 2019 at 11:35 PM Fariborz Navidan 
> wrote:
>
> > Hi,
> >
> > Indeed I didn't realized the answer. The one we set in a service offering
> > is allocated for each core or in total for whole the VM?
> >
> > On Sat, Mar 2, 2019 at 10:35 PM Ivan Kudryavtsev <
> kudryavtsev...@bw-sw.com>
> > wrote:
> >
> >> Actually, it works like everyone expects. In case of KVM you can just
> take
> >> a look at running instance with ps xa. But, I don't recommend setting
> CPU
> >> cap, though... VM will experience CPU steal, and users will not be
> happy.
> >> Better to deploy nodes with low cpu frequency and many cores.
> >>
> >> Without capping, the frequency is only for the resource calculation,
> when
> >> VMs are deployed, specifically:
> >> node.core-freq > freq && aggregate-node-freq-avail - cores x freq > 0 ->
> >> permit
> >>
> >> сб, 2 мар. 2019 г., 13:51 Fariborz Navidan :
> >>
> >> > Hi,
> >> >
> >> > I am wondering how the cpu time usage is calculated for a VM. Is it in
> >> per
> >> > core basis or the total fraction of cpu a vm can use. For example,
> when
> >> we
> >> > set 2 cores and 2000 MHz, the VM receives total of 2000MHz of 4000MHz
> >> > processing power?
> >> >
> >> > Thanks
> >> >
> >>
> >
>


-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>


Re: CPU speed in service offering

2019-03-02 Thread Ivan Kudryavtsev
Actually, it works like everyone expects. In the case of KVM you can just
take a look at the running instance with ps xa. I don't recommend setting a
CPU cap, though... the VM will experience CPU steal, and users will not be
happy. Better to deploy nodes with a low CPU frequency and many cores.

Without capping, the frequency is only used for resource calculation when
VMs are deployed, specifically:
node.core-freq > freq && aggregate-node-freq-avail - cores x freq > 0 ->
permit
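
A quick worked example with made-up numbers: on a host advertising 16 cores
at 2,000 MHz (32,000 MHz aggregate), a 2-core x 2,000 MHz offering is
admitted only while at least 4,000 MHz of unallocated capacity remains and
the requested 2,000 MHz does not exceed the host's per-core frequency; the
cap itself is only enforced when "limit CPU use" is set on the offering.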

сб, 2 мар. 2019 г., 13:51 Fariborz Navidan :

> Hi,
>
> I am wondering how the cpu time usage is calculated for a VM. Is it in per
> core basis or the total fraction of cpu a vm can use. For example, when we
> set 2 cores and 2000 MHz, the VM receives total of 2000MHz of 4000MHz
> processing power?
>
> Thanks
>


Re: Change NIC MAC Address

2019-03-02 Thread Ivan Kudryavtsev
Hi,
CloudStack MACs use a specific generation scheme, so in general: no. You can
probably change the *3 low octets* through MySQL; however, I don't recommend
doing that.
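
For reference only, a read-only sketch of where to look (assuming the usual
'cloud' database; I believe the MAC lives in the nics table, and the instance
id is a placeholder, so verify against your own version first):

mysql -u root -p cloud -e "SELECT id, instance_id, mac_address FROM nics WHERE instance_id = 68;"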

сб, 2 мар. 2019 г. в 11:31, Fariborz Navidan :

> Hi,
>
> Is it possible to change MAC address of default NIC of a VM?
>
> Thanks
>


-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>


Re: Snapshots on KVM corrupting disk images

2019-03-01 Thread Ivan Kudryavtsev
Hi, Sean,
I saw the PR https://github.com/apache/cloudstack/pull/3194,
which seems to cover one of the bugs. I haven't had enough time to dive into
the code and review the snapshot-related workflows, but it looks like this PR
does the right thing. I hope it will be added to 4.11.3.
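
Until that lands, a manual workaround sketch (assuming the qemu-guest-agent
is installed and running inside the guest; the instance name is a
placeholder) is to quiesce the guest around the snapshot yourself:

virsh domfsfreeze i-2-68-VM   # flush and freeze guest filesystems via the agent
# ... take the volume snapshot here ...
virsh domfsthaw i-2-68-VM     # thaw as soon as the snapshot has been taken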

чт, 28 февр. 2019 г. в 17:02, Sean Lair :

> Hi Ivan, I wanted to respond here and see if you published a PR yet on
> this.
>
> This is a very scary issue for us as customer can snapshot their volumes
> and end up causing corruption - and they blame us.  It's already happened -
> luckily we had Storage Array level snapshots in place as a safety net...
>
> Thanks!!
> Sean
>
> -----Original Message-
> From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> Sent: Sunday, January 27, 2019 7:29 PM
> To: users ; cloudstack-fan <
> cloudstack-...@protonmail.com>
> Cc: dev 
> Subject: Re: Snapshots on KVM corrupting disk images
>
> Well, guys. I dived into the CS agent scripts which make volume snapshots
> and found there is no code for suspend/resume and also no code for calling
> fsfreeze/fsthaw via the qemu agent. I don't see any blockers to adding that
> code yet and will try to add it in the nearest days. If tests go well, I'll
> publish the PR, which I suppose could be integrated into 4.11.3.
>
> пн, 28 янв. 2019 г., 2:45 cloudstack-fan
> cloudstack-...@protonmail.com.invalid:
>
> > Hello Sean,
> >
> > It seems that you've encountered the same issue that I've been facing
> > during the last 5-6 years of using ACS with KVM hosts (see this
> > thread, if you're interested in additional details:
> > https://mail-archives.apache.org/mod_mbox/cloudstack-users/201807.mbox
> > /browser
> > ).
> >
> > I'd like to state that creating snapshots of a running virtual machine
> > is a bit risky. I've implemented some workarounds in my environment,
> > but I'm still not sure that they are 100% effective.
> >
> > I have a couple of questions, if you don't mind. What kind of storage
> > do you use, if it's not a secret? Does you storage use XFS as a
> filesystem?
> > Did you see something like this in your log-files?
> > [***.***] XFS: qemu-kvm(***) possible memory allocation deadlock size
> > 65552 in kmem_realloc (mode:0x250)
> > [***.***] XFS: qemu-kvm(***) possible memory allocation deadlock size
> > 65552 in kmem_realloc (mode:0x250)
> > [***.***] XFS: qemu-kvm(***) possible memory allocation deadlock size
> > 65552 in kmem_realloc (mode:0x250)
> > Did you see any unusual messages in your log-file when the disaster
> > happened?
> >
> > I hope, things will be well. Wish you good luck and all the best!
> >
> >
> > ‐‐‐ Original Message ‐‐‐
> > On Tuesday, 22 January 2019 18:30, Sean Lair 
> wrote:
> >
> > > Hi all,
> > >
> > > We had some instances where VM disks are becoming corrupted when
> > > using
> > KVM snapshots. We are running CloudStack 4.9.3 with KVM on CentOS 7.
> > >
> > > The first time was when someone mass-enabled scheduled snapshots on
> > > a
> > lot of large number VMs and secondary storage filled up. We had to
> > restore all those VM disks... But believed it was just our fault with
> > letting secondary storage fill up.
> > >
> > > Today we had an instance where a snapshot failed and now the disk
> > > image
> > is corrupted and the VM can't boot. here is the output of some commands:
> > >
> > >
> > --
> > --
> > --
> > --
> > --
> > --
> > --
> > 
> > >
> > > [root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img
> > > check
> > ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
> > > qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80':
> &

Re: installation-database

2019-02-20 Thread Ivan Kudryavtsev
Looks like that is exactly what it means. Check your MySQL configuration for
root connections from the host you use to deploy. First try with the 'mysql'
CLI utility.
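
A quick sketch of those checks (paths assume a stock Ubuntu MySQL install):

# Can the deploying host reach MySQL on 10.8.9.230 at all?
mysql -h 10.8.9.230 -u root -p -e "SELECT 1;"

# Error 2003 (111 = connection refused) usually means mysqld is not listening
# on that address; check the bind-address setting:
grep -R "bind-address" /etc/mysql/
# It needs to be 0.0.0.0 (or the server's own IP) for a remote --deploy-as,
# and the root account must be allowed to connect from the deploying host.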

ср, 20 февр. 2019 г. в 13:46, Alejandro Ruiz Bermejo <
arbermejo0...@gmail.com>:

> I got this error when creating the CloudStack management database and I
> don't know where the problem is. Any idea?
>
> The command I used was cloudstack-setup-databases cloud:123@10.8.9.230
> --deploy-as=root:123  -i 10.8.9.230
>
>
> ERROR 2003 (HY000): Can't connect to MySQL server on '10.8.9.230' (111)
>
>
> Sql parameters:
> {'passwd': '123', 'host': '10.8.9.230', 'user': 'root', 'port': 3306}
>
> this is the content of  /etc/cloudstack/management/db.properties
>
> cluster.node.IP=10.8.9.230
> cluster.servlet.port=9090
> region.id=1
>
> # CloudStack database settings
> db.cloud.username=cloud
> db.cloud.password=cloud
> db.cloud.host=localhost
> db.cloud.driver=jdbc:mysql
> db.cloud.port=3306
> db.cloud.name=cloud
>


-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>


Re: installation troubles

2019-02-20 Thread Ivan Kudryavtsev
That's OK. Just go ahead.

ср, 20 февр. 2019 г. в 12:14, Alejandro Ruiz Bermejo <
arbermejo0...@gmail.com>:

> when i try with that option i'm getting signature errors. Even after wget
> -O - http://download.cloudstack.org/release.asc|apt-key add -
>
> On Wed, Feb 20, 2019 at 12:11 PM Ivan Kudryavtsev <
> kudryavtsev...@bw-sw.com>
> wrote:
>
> > This looks strange to me. I always use repo
> >
> > root@cs2-head1:~# cat /etc/apt/sources.list.d/cloudstack.list
> > deb http://cloudstack.apt-get.eu/ubuntu xenial 4.11
> >
> > and it works just fine.
> >
> > ср, 20 февр. 2019 г. в 12:08, Alejandro Ruiz Bermejo <
> > arbermejo0...@gmail.com>:
> >
> > > but i'm following the instrucions of the oficial guide at
> > > http://docs.cloudstack.apache.org/en/4.11.2.0/installguide/index.html
> > >
> > > It is why i'm asking
> > >
> > > what other way do i have
> > >
> > > On Wed, Feb 20, 2019 at 12:05 PM Ivan Kudryavtsev <
> > > kudryavtsev...@bw-sw.com>
> > > wrote:
> > >
> > > > Alejandro,
> > > > I'm not sure the community is able to help you with local DEB repo
> you
> > > host
> > > > in the network. Try to use the guide and install from the official
> > > > repository.
> > > >
> > > > ср, 20 февр. 2019 г. в 11:55, Alejandro Ruiz Bermejo <
> > > > arbermejo0...@gmail.com>:
> > > >
> > > > > i'm the network administrator, and the repository is a local one
> that
> > > we
> > > > > ave. But that's not the problem. the thing is with the one with IP
> > > > address
> > > > > 10.8.9.230, wich is where i'm serving the .deb packages as the
> guide
> > > says
> > > > >
> > > > > On Wed, Feb 20, 2019 at 11:35 AM Ivan Kudryavtsev <
> > > > > kudryavtsev...@bw-sw.com>
> > > > > wrote:
> > > > >
> > > > > > Hi. The network 10.X.X.X/8 is a private network which resides
> > inside
> > > > your
> > > > > > LAN. Consult with your network administrator. Doesn't look like
> you
> > > are
> > > > > > getting packages from the right download repository.
> > > > > >
> > > > > > ср, 20 февр. 2019 г. в 11:30, Alejandro Ruiz Bermejo <
> > > > > > arbermejo0...@gmail.com>:
> > > > > >
> > > > > > > Hi i'm new with openstack and i'm following the installation
> > guide
> > > of
> > > > > the
> > > > > > > oficial site. Untill now i already  got compiled the source
> > > packages
> > > > > and
> > > > > > > i'm adding the apt repository but when i run the apt-get update
> > i'm
> > > > > > having
> > > > > > > this error
> > > > > > >
> > > > > > > Ign:1 http://10.8.9.230/cloudstack/repo/binary ./ InRelease
> > > > > > > Ign:2 http://10.8.9.230/cloudstack/repo/binary ./ Release
> > > > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
> > > > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
> > > Translation-en_US
> > > > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./
> Translation-en
> > > > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
> > > > > > > Hit:6 http://10.8.11.4/ubuntu xenial InRelease
> > > > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
> > > Translation-en_US
> > > > > > > Hit:7 http://10.8.11.4/ubuntu xenial-updates InRelease
> > > > > > > Hit:8 http://10.8.11.4/ubuntu xenial-backports InRelease
> > > > > > > Hit:9 http://10.8.11.4/ubuntu xenial-security InRelease
> > > > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./
> Translation-en
> > > > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
> > > > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
> > > Translation-en_US
> > > > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./
> Translation-en
> > > > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
> > > > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
> > > Translation-en_US
> > >

Re: installation troubles

2019-02-20 Thread Ivan Kudryavtsev
Looks like the documentation has become less straightforward.
Take a moment to read
http://docs.cloudstack.apache.org/en/4.11.2.0/installguide/overview/index.html#package-repository
which directs to
http://cloudstack.apache.org/downloads.html

where you can find more about the existing repos.

ср, 20 февр. 2019 г. в 12:11, Ivan Kudryavtsev :

> This looks strange to me. I always use repo
>
> root@cs2-head1:~# cat /etc/apt/sources.list.d/cloudstack.list
> deb http://cloudstack.apt-get.eu/ubuntu xenial 4.11
>
> and it works just fine.
>
> ср, 20 февр. 2019 г. в 12:08, Alejandro Ruiz Bermejo <
> arbermejo0...@gmail.com>:
>
>> but i'm following the instrucions of the oficial guide at
>> http://docs.cloudstack.apache.org/en/4.11.2.0/installguide/index.html
>>
>> It is why i'm asking
>>
>> what other way do i have
>>
>> On Wed, Feb 20, 2019 at 12:05 PM Ivan Kudryavtsev <
>> kudryavtsev...@bw-sw.com>
>> wrote:
>>
>> > Alejandro,
>> > I'm not sure the community is able to help you with local DEB repo you
>> host
>> > in the network. Try to use the guide and install from the official
>> > repository.
>> >
>> > ср, 20 февр. 2019 г. в 11:55, Alejandro Ruiz Bermejo <
>> > arbermejo0...@gmail.com>:
>> >
>> > > i'm the network administrator, and the repository is a local one that
>> we
>> > > ave. But that's not the problem. the thing is with the one with IP
>> > address
>> > > 10.8.9.230, wich is where i'm serving the .deb packages as the guide
>> says
>> > >
>> > > On Wed, Feb 20, 2019 at 11:35 AM Ivan Kudryavtsev <
>> > > kudryavtsev...@bw-sw.com>
>> > > wrote:
>> > >
>> > > > Hi. The network 10.X.X.X/8 is a private network which resides inside
>> > your
>> > > > LAN. Consult with your network administrator. Doesn't look like you
>> are
>> > > > getting packages from the right download repository.
>> > > >
>> > > > ср, 20 февр. 2019 г. в 11:30, Alejandro Ruiz Bermejo <
>> > > > arbermejo0...@gmail.com>:
>> > > >
>> > > > > Hi i'm new with openstack and i'm following the installation
>> guide of
>> > > the
>> > > > > oficial site. Untill now i already  got compiled the source
>> packages
>> > > and
>> > > > > i'm adding the apt repository but when i run the apt-get update
>> i'm
>> > > > having
>> > > > > this error
>> > > > >
>> > > > > Ign:1 http://10.8.9.230/cloudstack/repo/binary ./ InRelease
>> > > > > Ign:2 http://10.8.9.230/cloudstack/repo/binary ./ Release
>> > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
>> > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
>> Translation-en_US
>> > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./ Translation-en
>> > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
>> > > > > Hit:6 http://10.8.11.4/ubuntu xenial InRelease
>> > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
>> Translation-en_US
>> > > > > Hit:7 http://10.8.11.4/ubuntu xenial-updates InRelease
>> > > > > Hit:8 http://10.8.11.4/ubuntu xenial-backports InRelease
>> > > > > Hit:9 http://10.8.11.4/ubuntu xenial-security InRelease
>> > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./ Translation-en
>> > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
>> > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
>> Translation-en_US
>> > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./ Translation-en
>> > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
>> > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
>> Translation-en_US
>> > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./ Translation-en
>> > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
>> > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
>> Translation-en_US
>> > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./ Translation-en
>> > > > > Err:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
>> > > > >   404  Not Found
>> > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
>&

Re: installation troubles

2019-02-20 Thread Ivan Kudryavtsev
This looks strange to me. I always use this repo:

root@cs2-head1:~# cat /etc/apt/sources.list.d/cloudstack.list
deb http://cloudstack.apt-get.eu/ubuntu xenial 4.11

and it works just fine.
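
Putting the pieces together, a minimal setup sketch (assuming Ubuntu 16.04 /
xenial and the community repository; adjust the release name and version to
your environment):

echo "deb http://cloudstack.apt-get.eu/ubuntu xenial 4.11" \
  > /etc/apt/sources.list.d/cloudstack.list
wget -O - http://download.cloudstack.org/release.asc | apt-key add -
apt-get update
apt-get install cloudstack-management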

ср, 20 февр. 2019 г. в 12:08, Alejandro Ruiz Bermejo <
arbermejo0...@gmail.com>:

> but i'm following the instrucions of the oficial guide at
> http://docs.cloudstack.apache.org/en/4.11.2.0/installguide/index.html
>
> It is why i'm asking
>
> what other way do i have
>
> On Wed, Feb 20, 2019 at 12:05 PM Ivan Kudryavtsev <
> kudryavtsev...@bw-sw.com>
> wrote:
>
> > Alejandro,
> > I'm not sure the community is able to help you with local DEB repo you
> host
> > in the network. Try to use the guide and install from the official
> > repository.
> >
> > ср, 20 февр. 2019 г. в 11:55, Alejandro Ruiz Bermejo <
> > arbermejo0...@gmail.com>:
> >
> > > i'm the network administrator, and the repository is a local one that
> we
> > > ave. But that's not the problem. the thing is with the one with IP
> > address
> > > 10.8.9.230, wich is where i'm serving the .deb packages as the guide
> says
> > >
> > > On Wed, Feb 20, 2019 at 11:35 AM Ivan Kudryavtsev <
> > > kudryavtsev...@bw-sw.com>
> > > wrote:
> > >
> > > > Hi. The network 10.X.X.X/8 is a private network which resides inside
> > your
> > > > LAN. Consult with your network administrator. Doesn't look like you
> are
> > > > getting packages from the right download repository.
> > > >
> > > > ср, 20 февр. 2019 г. в 11:30, Alejandro Ruiz Bermejo <
> > > > arbermejo0...@gmail.com>:
> > > >
> > > > > Hi i'm new with openstack and i'm following the installation guide
> of
> > > the
> > > > > oficial site. Untill now i already  got compiled the source
> packages
> > > and
> > > > > i'm adding the apt repository but when i run the apt-get update i'm
> > > > having
> > > > > this error
> > > > >
> > > > > Ign:1 http://10.8.9.230/cloudstack/repo/binary ./ InRelease
> > > > > Ign:2 http://10.8.9.230/cloudstack/repo/binary ./ Release
> > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
> > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
> Translation-en_US
> > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./ Translation-en
> > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
> > > > > Hit:6 http://10.8.11.4/ubuntu xenial InRelease
> > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
> Translation-en_US
> > > > > Hit:7 http://10.8.11.4/ubuntu xenial-updates InRelease
> > > > > Hit:8 http://10.8.11.4/ubuntu xenial-backports InRelease
> > > > > Hit:9 http://10.8.11.4/ubuntu xenial-security InRelease
> > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./ Translation-en
> > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
> > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
> Translation-en_US
> > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./ Translation-en
> > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
> > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
> Translation-en_US
> > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./ Translation-en
> > > > > Ign:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
> > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
> Translation-en_US
> > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./ Translation-en
> > > > > Err:3 http://10.8.9.230/cloudstack/repo/binary ./ Packages
> > > > >   404  Not Found
> > > > > Ign:4 http://10.8.9.230/cloudstack/repo/binary ./
> Translation-en_US
> > > > > Ign:5 http://10.8.9.230/cloudstack/repo/binary ./ Translation-en
> > > > > Reading package lists... Done
> > > > > W: The repository 'http://10.8.9.230/cloudstack/repo/binary ./
> > > Release'
> > > > > does not have a Release file.
> > > > > N: Data from such a repository can't be authenticated and is
> > therefore
> > > > > potentially dangerous to use.
> > > > > N: See apt-secure(8) manpage for repository creation and user
> > > > configuration
> > > > > details.
> > > > > E: Failed to fetch
> > http://10.8.9.230/cloudstack/repo/binary/./Packages
> > > > > 404  Not Found
> > > > > E: Some index files failed to download. They have been ignored, or
> > old
> > > > ones
> > > > > used instead.
> > > > >
> > > > >
> > > > > Can anyone help me fix this problem
> > > > >
> > > >
> > > >
> > > > --
> > > > With best regards, Ivan Kudryavtsev
> > > > Bitworks LLC
> > > > Cell RU: +7-923-414-1515
> > > > Cell USA: +1-201-257-1512
> > > > WWW: http://bitworks.software/ <http://bw-sw.com/>
> > > >
> > >
> >
> >
> > --
> > With best regards, Ivan Kudryavtsev
> > Bitworks LLC
> > Cell RU: +7-923-414-1515
> > Cell USA: +1-201-257-1512
> > WWW: http://bitworks.software/ <http://bw-sw.com/>
> >
>


-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>

