Another way to deal with it is to use KVM agent hooks:
https://github.com/apache/cloudstack/blob/8f6721ed4c4e1b31081a951c62ffbe5331cf16d4/agent/conf/agent.properties#L123
You can implement the logic in Groovy to modify the domain XML during VM start
to support extra devices outside of CloudStack management.
However, it would be a great option to have CloudStack as an orchestration
mechanism for Proxmox, working on top of Proxmox tooling, without the highly
obscure and hard-to-troubleshoot CloudStack router, SSVM, and CPVM. Just
dreaming :)
Tue, Nov 21, 2023 at 16:59, Ivan Kudryavtsev:
> Cloudstack has cus
ay to do, you have to create
> a new compute offering and then apply it to a VM. If you need to change
> this value frequently, I think it is a nightmare
>
> Cloudstack is good for mass-creating VMs with the same spec. Proxmox is for
> configuring them one by one.
Hi, no problem at all.
Tue, Nov 21, 2023 at 16:30, Gary Dixon:
> I believe Windows-based VMs in Proxmox have an issue booting up
> properly on KVM hosts. We are also seeing this in Cloudstack
>
>
> Gary Dixon
> Senior Technical Consultant
Hi, qcow2 volumes may be thin, sparse, or full. Maybe the FS is slow, so
while the image does COW allocations, the performance degrades? Try making
a fully preallocated qcow2 image (image name and size are placeholders):
qemu-img create -f qcow2 -o preallocation=full <image.qcow2> <size>
And check again.
Wed, Sep 7, 2022, 16:26 Mevludin Blazevic:
> Hi all,
just can't open them in the
> web console
>
> Best regards,
>
>
> On Tue, Sep 6, 2022 at 5:04 PM Ivan Kudryavtsev wrote:
>
> > I met the out of space situation with similar symptoms.
> >
> > Tue, Sep 6, 2022, 19:02 Bs Serge:
> >
> > > Thes
I met the out of space situation with similar symptoms.
Tue, Sep 6, 2022, 19:02 Bs Serge:
> These are logs inside the console system VM
>
>
> https://paste.0xfc.de/?1d875169513dda9e#gSKvHctUwr5je9DLCpUCxizgMkDoW4DK6jf4GXnP32k
>
> Best Regards,
>
>
> On Tue, Sep 6, 2022 at 4:54 PM Bs Serge
Hi, I have some experience with customizing DNS in ACS, but I don't get what
you are trying to achieve.
Wed, May 4, 2022, 5:47 PM Ricardo Pertuz:
> Hi all,
>
> Is there any effort to enable adding custom dns registers to dnsmasq on
> the VR?
>
> BR,
>
> Ricardo
>
}
> virtual_ipaddress {
> 10.231.4.112/24
> }
> track_script {
> check_backend
> }
> }
>
> Best Regards,
> Jayanth
>
> On Tue, May 3, 2022 at 12:33 PM Ivan Kudryavtsev wrote:
>
> > Sounds cool,
> >
> > Just ensure that in
May 2, 2022 at 10:48 AM Ivan Kudryavtsev wrote:
>
> > Hi, I use MariaDB Galera cluster.
> >
> > But you have to pin all the CS management servers to the same Galera node
> > to make CloudStack's transactional operations work correctly. HAproxy or
> > shared common ip sol
Hi, I use a MariaDB Galera cluster.
But you have to pin all the CloudStack management servers to the same Galera
node to make CloudStack's transactional operations work correctly. HAProxy or
a shared common IP solves that.
Mon, May 2, 2022, 7:34 AM Jayanth Reddy:
> Hello guys,
>
> How are you doing database
Hi, you can do that safely. Maybe restarting the agents is required, but
nothing that stops the service.
On Wed, Feb 9, 2022 at 2:51 PM Edward St Pierre
wrote:
> Hi,
>
> I have an existing KVM cluster and am looking to enable local storage for
> certain workloads.
>
> Is it safe to enable this on
Hi,
Take a look at Cyclops (https://cyclops-billing.readthedocs.io/en/latest/).
Actually, integration with any existing billing solution can be done in a
matter of days.
On Sun, Jan 23, 2022 at 11:30 PM Saurabh Rapatwar
wrote:
> I guess there is no open source solution but at reasonable price you can
Hi, look at the CPU steal time percent. It works for KVM, at least. It should
go to 50% if you push your capped core to 100%.
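A minimal sketch of reading steal inside a Linux guest, assuming the standard /proc/stat layout where the eighth value after the `cpu` label is steal ticks:

```shell
#!/bin/sh
# Print CPU steal percent over a 1-second window.
# /proc/stat "cpu" line: user nice system idle iowait irq softirq steal ...
snap() { awk '/^cpu /{print $9, $2+$3+$4+$5+$6+$7+$8+$9}' /proc/stat; }
set -- $(snap); s1=$1 t1=$2
sleep 1
set -- $(snap); s2=$1 t2=$2
awk -v s="$((s2 - s1))" -v t="$((t2 - t1))" \
    'BEGIN { printf "steal: %.1f%%\n", (t > 0) ? 100 * s / t : 0 }'
```

`top` shows the same value in its `st` column; the script is just the raw source of that number.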
Thu, Jan 20, 2022, 17:42 Evgeniy Dikevich <
evgeniy.dikev...@becloud.by>:
> Hi all!
>
> ACS 4.16 + XCP-NG 8.2
>
> Maybe someone can me explain how CPU cap
Local storage gives the simplest design and the most predictable behavior.
Live migration is often overrated, while host crashes are pretty rare.
We have servers with 500+ days of uptime running CloudStack. So... it's
just fine, at least with KVM.
Tue, Dec 21, 2021, 17:37 Gabriel Bräscher
Looks like a joke. HDFS is not an FS you WANT or CAN use for VM
filesystems.
Its architecture is completely different from what a POSIX-compliant OS
needs to keep QCOW2 (or even RAW) images.
If you want a fault-tolerant FS, use Ceph or Gluster or even NFS over DRBD.
NFS is a stateless
Take a look at this PR:
https://github.com/apache/cloudstack/pull/3839/files
Sat, Oct 2, 2021, 10:09 Ivan Kudryavtsev:
> Hi, it can be done with cloudstack agent hooks implemented in groovy, but
> takes some coding and design.
>
> Fri, Oct 1, 2021, 22:04 James Steele:
>
>
Hi, it can be done with CloudStack agent hooks implemented in Groovy, but
it takes some coding and design.
Fri, Oct 1, 2021, 22:04 James Steele:
> Hi all,
>
> we have added some Ubuntu 20.04 hosts which have an AMD ATI Radeon Pro WX
> 5100 fitted inside.
> We would like to passthrough the Radeon
on NFS and no
> issues at all.
> >
> > Just be sure to select the right vendor and size it correctly.
> >
> >
> >
> >
> > -Original Message-
> > From: Ivan Kudryavtsev
> > Sent: 23 September 2021 11:02
> > To: users
> > Subje
Abishek,
NFS over a bunch of drives works just fine but has no means for failover
(out of the box, when self-built). If your benchmark shows enough IO
performance per VM, then NFS is just the way to go.
Keep in mind that NFS can have various backing store technologies like
NetApp appliances,
e VM's to have same CPU as the host
> > machines Will it be possible?
> >
> > Thank You.
> >
> > On 2021/09/13 07:55:39, Ivan Kudryavtsev wrote:
> > > That is just fine. Go ahead, it's not an error.
> > >
> > > On Mon, Sep 13, 2021 at 2:24 PM avi
ocumented. I want the VM's to have same CPU as the host
> machines Will it be possible?
>
> Thank You.
>
> On 2021/09/13 07:55:39, Ivan Kudryavtsev wrote:
> > That is just fine. Go ahead, it's not an error.
> >
> > On Mon, Sep 13, 2021 at 2:24 PM avi wrote:
> >
That is just fine. Go ahead, it's not an error.
On Mon, Sep 13, 2021 at 2:24 PM avi wrote:
> Hello All,
>
> I am using cloudstack 4.15.1 with KVM host. I was playing with changing
> guest cpu model and tested out
> host-passthrough and host-model but I was unable to succeed. I changed the
>
GlusterFS works fine as a shared mountpoint. No NFS or other stuff is
required. Just mount it everywhere and you are good to go. Performance is
acceptable (at least for a bunch of SSDs), but not comparable with local
storage, of course. Not recommended for IO-intensive VMs. We recommend such
VMs for
Hi, the actual error is earlier (above) in the quoted log part. Please
provide about 100 lines before it.
On Thu, Sep 2, 2021 at 2:55 PM technologyrss.mail <
technologyrss.m...@gmail.com> wrote:
> *Hi,*
>
> I setup Advanced zone using *ACS v4.15.1* but can't instance properly. I
> attached ACS log
).
> We have also already thought about a very similar workaround as you have
> described, but with SharedMountPoint (instead of NFS) in
> Single-Node-Clusters ... thus the storage pool would be used directly as a
> local filesystem/folder by the host ^^
>
> regards,
> Michael
Hi, you cannot have multiple local storages for a single host (it would be
great if that were supported, but it is not yet).
There is basically one real-life workaround:
1. you add a single host to a single cluster (e.g. C1)
2. you export storage pools from the host as NFS filesystems with scope
Hi. Just set 'HA Enabled' on the service offerings used for VMs with the
auto-launch capability.
Sun, May 30, 2021, 19:21 Bs Serge:
> Hello, good community,
>
> Is there a way to automatically start instances when the host server is UP?
>
> The SystemVMs are UP automatically!
>
> Centos8
>
Well, it could be like that; the only other way is to fix it through the DB,
but that is not a supported approach.
Sun, Jan 3, 2021, 15:55 Hean Seng:
> I tried destroying it, but when it is rebuilt, it gets the same IP
>
> On Sun, Jan 3, 2021 at 4:46 PM Ivan Kudryavtsev wrote:
>
> > Hi,
Hi, just destroy them.
Sun, Jan 3, 2021, 14:12 Hean Seng:
> Hi
>
>
> Is there any way to change IP for ConsoleProxy or SecondaryVM ?
>
> In the SystemVm Detail Page, I did not see any place to Change IP .
>
>
> --
> Regards,
> Hean Seng
>
password pre-assigned, right?
>
> Which piece of code is responsible for password/key reset, is it
> cloud-init? or is there any other involved part.
>
> I will try to work out a fix and report it to the template owner.
>
> Regards,
> Rafael
>
> On Mon, 2020-11-23 12:32 AM,
Hi. It looks like an improperly crafted template, not an ACS issue.
Mon, Nov 23, 2020, 02:18 Rafael del Valle:
> Hi Hean,
>
> Mystery solved.
>
> The template comes with Password Enabled in SSH server. And debian user
> has a default password: "password".
>
> Assigning the SSH key only added
Hi, CloudStack heavily relies on lock/unlock functionality.
A Galera cluster is fine, but as Andrija says, a single read/write node
must be used for all CS management servers.
Thu, Nov 12, 2020, 20:31 Andrija Panic:
> As long as you use a single node for writes and reads - yes.
> Some users
Hi Hean,
I've never tried pNFS, but the problem is the same. If you want failover
and hyper scaling, then use Gluster or Ceph. Why would you use PNFS which
is used by almost nobody?
People use NFS because:
1. it's primitive
2. it's easy to manage
3. it supports migrations
4. if planned well
Hi, the hypervisor limits configured in a service offering allow capping IOPS
and bytes/s for NFS as well as for other storage, because they are enforced
by QEMU.
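For reference, a hedged sketch of what such per-disk limits look like in the libvirt domain XML (the disk path and numbers are hypothetical examples):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/vm-disk.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <iotune>
    <!-- enforced by QEMU itself, regardless of the backing storage -->
    <total_iops_sec>1000</total_iops_sec>
    <total_bytes_sec>104857600</total_bytes_sec>
  </iotune>
</disk>
```

Because the throttle lives in QEMU, it works the same over NFS as over any other pool type.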
Wed, Oct 14, 2020, 01:53 Hean Seng:
> Hi
>
> Do anybody know NFS implementation of Primary storage can support QOS for
> IOPs in Services Offering
or VM i-2-81-VM
>
> 2020-09-28 03:53:36,858 WARN
> [resource.wrapper.LibvirtSecurityGroupRulesCommandWrapper]
> (agentRequest-Handler-3:null) (logid:74058678) Failed to program default
> network rules for vm i-2-81-VM
>
>
>
>
> On Mon, Sep 28, 2020 at 4:34 PM Ivan
/lib/sysctl.d/00-system.conf
>
> # Enable netfilter on bridges. net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1 net.bridge.bridge-nf-call-arptables
> = 1
>
> And it has been done too.
>
>
>
> On Mon, Sep 28, 2020 at 4:05 PM Ivan Kudryav
ion skipped.
>
> 2020-09-28 03:04:53,083 WARN [kvm.resource.LibvirtKvmAgentHook]
> (agentRequest-Handler-5:null) (logid:4f23845b) Groovy script
> '/etc/cloudstack/agent/hooks/libvirt-vm-state-change.groovy' is not
> available. Transformations will not be applied.
>
> O
This just means you installed it the wrong way. ebtables and iptables
must be filled with rules like:
-A i-6242-10304-def -m state --state RELATED,ESTABLISHED -j ACCEPT
-A i-6242-10304-def -p udp -m physdev --physdev-in vnet18
--physdev-is-bridged -m udp --sport 68 --dport 67 -j ACCEPT
-A
To be precise, look at:
iptables-save
ebtables -t nat -L
https://github.com/apache/cloudstack/blob/master/scripts/vm/network/security_group.py
Fri, Sep 25, 2020, 16:58 Hean Seng:
> Ok. Let me look on that.
>
> Thank you
>
> On Fri, Sep 25, 2020 at 4:36 PM Ivan Kudryavtsev wrote
are Guest network not able to use
> securitygroup right ?
>
>
> Actually the better way is to inject a proper cloud-init value to set the IP
> instead of DHCP.
>
>
>
>
>
> On Fri, Sep 25, 2020 at 4:12 PM Ivan Kudryavtsev wrote:
>
> > Hi,
> >
> > no way. S
Hi,
no way. Security groups block rogue DHCP servers.
Fri, Sep 25, 2020, 13:20 Hean Seng:
> Hi
>
> Cloudstack use DHCP to allocate IP to VM.
>
> Do you all have issue if some of the VM in same Network, if other VM
> accidentally announce other DHCP , and it affect the deployment of IP of
I use Gluster on SSD R5 (two replicas + arbiter) with ACS for those who
need VM HA. It works fine, but I doubt it will work well on HDD RAID5, since
that suits only linear workloads without a BBU and other tricks.
Fri, Apr 24, 2020, 21:15:
> Hi,
>
> I would not use Gluster in production for VM
No way. Just read the basics of SMP, MPP, NUMA computing.
Sat, Feb 22, 2020, 15:15 Cloud Udupi:
> Hi all,
> We are looking for a solution to combine the CPU Cores and RAM of 3 Servers
> to meet our requirement for a VM in ACS which is used for heavy workload.
>
> *We are using ACS 4.13 with
This is a nice improvement.
Wed, Feb 19, 2020, 15:41 Rohit Yadav:
> All,
>
> Many list APIs, such as the listRouters API, accept a `listall` parameter
> as well as a `projectid` parameter. Currently, on calling a list API with
> listall=true and projectid=-1 it only returns resources
You have to deploy HA NFS outside CloudStack. CS doesn't care about storage
fault tolerance.
Gluster is fine (shared mountpoint), Ceph is fine too, and HA NFS can be
deployed with certain approaches or with proprietary appliances.
Sat, Feb 1, 2020, 19:28 Cloud Udupi:
> Hi,
> We are new to
work on seems to have more
> potential. Do you see it can support my use case?
>
> Thanks, Sakari
>
>
>
> On Fri, Jan 24, 2020 at 6:08 PM Ivan Kudryavtsev wrote:
>
> > Sakari, looks like you are looking for this one:
> > https://github.com/apache/clouds
Sakari, looks like you are looking for this one:
https://github.com/apache/cloudstack/pull/3510
Also, I'm working on an implementation which handles it another way:
https://github.com/apache/cloudstack/issues/3823
Fri, Jan 24, 2020, 17:45 Sakari Poussa:
> Hi,
>
> Is there a way to pass extra
Hello, community,
I recorded a quick video that shows how CSUI can be used to demonstrate an
integrated template which rolls out docker-compose passed through VM UserData,
tracks the deployment with a special in-VM tracking script
(install-monitor), and enables simplified access to web-tracking
Disk allocated != disk used. Allocated is how much all volumes will span once
their thin-provisioning optimization stops and they fully use the space.
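The gap between the two numbers is easy to see with any sparse file; a small sketch using only coreutils (the file is a throwaway temp file):

```shell
#!/bin/sh
# A sparse file has a large apparent size but occupies few blocks,
# mirroring "allocated" vs "used" for thin-provisioned volumes.
f=$(mktemp)
truncate -s 1G "$f"                                   # apparent size: 1 GiB
apparent=$(stat -c %s "$f")                           # size from metadata
used=$(( $(stat -c %b "$f") * $(stat -c %B "$f") ))   # blocks actually allocated
echo "apparent=$apparent used=$used"
rm -f "$f"
```

On a freshly truncated file, `used` stays near zero until data is written.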
Tue, Sep 17, 2019, 13:00 Piotr Pisz:
> Hi all,
>
> I have a strange situation, we have a CephFS share mounted as
> SharedMountPoint.
> CS shows
Virtio-scsi disks are detected as sdX. It's absolutely fine.
Mon, Sep 2, 2019, 19:32 Andrija Panic:
> lspci inside that OS ?
>
> On Mon, 2 Sep 2019 at 14:25, Fariborz Navidan
> wrote:
>
> > Hi,
> >
> > XML says it is using virtio-scsi but guest detects disk device at
> /dev/sda
> >
> >
Even when no SGs are used, the agent still creates iptables/ebtables rules and
should block MAC/IP spoofing and rogue DHCP announces. I'm not sure how it
works in the current CS version, but it is either:
- a local bug, which must be investigated through agent logs and
iptables/ebtables dumps
- a CS bug
option? Between this and local
> NFS which one offers better performance?
>
> Thanks
>
> On Thu, Jul 18, 2019 at 5:38 AM Ivan Kudryavtsev >
> wrote:
>
> > Hi,
> >
> > As for 4.11.2, no way to have multiple local storages configured for a
> > single
Hi,
As of 4.11.2, there is no way to have multiple local storages configured for a
single host. There is no simple way to overcome it. The only one I see is
pretty ugly: a locally mounted NFS, created as a cluster-wide storage when
only a single host is added to a single cluster...
In short, it's not
, ext4 would be the
> way to go.
>
> On Mon, Jul 15, 2019 at 1:23 PM Ivan Kudryavtsev >
> wrote:
>
> > Hi,
> >
> > if you use local fs, use just ext4 over the required disk topology which
> > gives the desired redundancy.
> >
> > E.g. JBOD,
Hi,
if you use local fs, use just ext4 over the required disk topology which
gives the desired redundancy.
E.g. JBOD and R0 work well when a data safety policy is established and
backups are well maintained.
Otherwise look at R5, R10, or R6.
Mon, Jul 15, 2019, 18:05:
> Isn't that a bit apples
> exported using libvirt exporter
>
> Sent from my iPhone
>
> > On 28-Jun-2019, at 3:51 PM, Ivan Kudryavtsev
> wrote:
> >
> > Hi. It's easily done with Zabbix. Whole variety of underlying topologies
> is
> > too difficult to monitor with prebuilt monitoring
Hi. It's easily done with Zabbix. The whole variety of underlying topologies
is too difficult to monitor with a prebuilt monitoring system... anyway, you
can code it!
Fri, Jun 28, 2019, 20:30 Fariborz Navidan:
> Hello All,
>
> Does ACS provide a way to monitor a host's network bandwidth (RX/TX) and
Daniel,
why do you think you should be able to ping from the hypervisor? Normally,
you have to add an IP in the same net to the bridge to ping the tun/tap. I'm
not sure, but if it doesn't work after the step above, check your
iptables/ebtables rules to verify every rule is OK.
Commands:
iptables-save
ebtables -t
Alejandro, what do the HV libvirt logs show?
Tue, May 28, 2019, 4:49 Alejandro Ruiz Bermejo:
> Hi,
> I have installed cloudstack 4.11.2 on ubuntu 16 with one management server
> and one compute node with kvm.
>
> I want to add a new compute node with a different hypervisor (LXC) on
> another server
t; Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
> - Original Message -
> > From: "Ivan Kudryavtsev"
> > To: "users" , "dev" <
> d...@cloudstack.apache.org>
> > Sent: Friday, 17 May, 20
May 17, 2019, 20:04 Ivan Kudryavtsev:
> Well, just FYI, I changed cache_mode from NULL (none), to writethrough
> directly in DB and the performance boosted greatly. It may be an important
> feature for NVME drives.
>
> Currently, on 4.11, the user can set cache-mode for disk o
., 19:30 Ivan Kudryavtsev :
> Darius, thanks for your participation,
>
> first, I used 4.14 kernel which is the default one for my cluster. Next,
> switched to 4.15 with dist-upgrade.
>
> Do you have an idea how to set the number of queues for virtio-scsi with
> Cloudstack?
>
. You
> should switch kernel if bugs are still there with 4.15 kernel.
>
> On Fri, May 17, 2019 at 12:14 PM Ivan Kudryavtsev
> wrote:
> >
> > Hello, colleagues.
> >
> > Hope, someone could help me. I just deployed a new VM host with Intel
> P4500
> >
Host is Dell r620 with Dual e5-2690/256GB 1333 DDR3.
Fri, May 17, 2019, 19:22 Ivan Kudryavtsev:
> Nux,
>
> I use Ubuntu 16.04 with "none" scheduler and the latest kernel 4.15. Guest
> is Ubuntu 18.04 with Noop scheduler for scsi-virtio and "none" for virtio.
&
ght tuned profile? What about
> in the guest? Which IO scheduler?
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
> - Original Message -
> > From: "Ivan Kudryavtsev"
> > To: "users"
> > Sent: Fri
see now, is that it works slower than a couple of Samsung 960
PROs, which is extremely strange.
Thanks in advance.
--
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>
>
>
> From: Ivan Kudryavtsev
> Sent: Thursd
Richard, the most probable problem is with the bridge devices. The management
server doesn't care about system VMs; the only units which care are the SSVM
and the hypervisor. Also, if you are using naive RAID/NFS within one cluster,
where any HV can mount any storage (mesh), it's an extremely bad idea. You
will get a lot
Peter,
once you have provisioned a single VM from a certain template, it is no longer
copied from secondary storage to primary, so boot is almost instant.
In our case, I can read from SS at 800 MB/s and write to primary at 1 GB/s;
template copy for normal templates takes only 2-3 seconds. Again, when it
is copied once,
any changes
> on these templates were done - are your records completely missing or just
> altered in bad way ?
>
> Can you double check the API log for any delete template API calls ?
>
>
> On Mon, 15 Apr 2019 at 17:39, Ivan Kudryavtsev
> wrote:
>
> > Andrija,
>
for the restoration of the template.properties - do you have a backup ?
>
> Best,
> Andrija
>
> On Mon, 15 Apr 2019 at 16:26, Ivan Kudryavtsev
> wrote:
>
> > To follow up. When SSVM boots it tries to redownload all the templates
> from
> > original sources this leads to
tries to download all the templates on SSVM again?
Never seen that before.
Mon, Apr 15, 2019 at 09:40, Ivan Kudryavtsev:
> Hello, community.
>
> Today, we've met a problem with ACS SS which looks like a critical
> error. At some point in time, new templates stopped uploading
y.
Is there a way to recreate "template.properties" from the DB or via some
other approach? All the templates are still in place, but they are not
activated upon SSVM start.
Many thanks.
using Packer and Cloudstack is supported. But I cannot find a way
> to use a preseed file to create a Debian/Ubuntu template.
> Thanks for any help!
>
> cu Swen
>
>
>
Investigate the stack trace in the logs, and share everything related once
you find it.
Fri, Mar 29, 2019, 10:19 Fariborz Navidan:
> Not for me! I think my CS DB is inconsistent somehow! I cannot figure out
> which tables should I manipulate to fix it.
>
> On Fri, Mar 29, 2019 at 6:45 PM Ivan
VR deletion is OK in a Basic Zone on 4.11.2; it works normally, and the VR is
created automatically.
Fri, Mar 29, 2019, 10:04 Fariborz Navidan:
> There should be inconsistency in DB. Because I did a wrong before and have
> deleted all records in domain_router and router_network_ref tables
> manually. I
es
> > >> > > or your kernel needs to be upgraded.
> > >> > > Mar 21 11:45:01 mtl1-apphst03.mt.pbt.com.mt dnsmasq[12206]: read
> > >> > > /etc/hosts
> > >> > > - 4 addresses
> > >> > > Mar 21 11:45:01 mtl1-apphst03.mt.pbt.com.mt dnsmasq[12206]: read
> > >> > > /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
> > >> > > Mar 21 11:45:01 mtl1-apphst03.mt.pbt.com.mt dnsmasq-dhcp[12206]:
> > read
> > >> > > /var/lib/libvirt/dnsmasq/default.hostsfile
> > >> > > Mar 21 11:45:01 mtl1-apphst03.mt.pbt.com.mt libvirtd[537]:
> > 2019-03-21
> > >> > > 11:45:01.354+: 566: warning : virSecurityManagerNew:189 :
> > >> Configured
> > >> > > security driver "none" disables default policy to create confined
> > >> guests
> > >> > > Mar 21 11:49:57 mtl1-apphst03.mt.pbt.com.mt libvirtd[537]:
> > 2019-03-21
> > >> > > 11:49:57.354+: 542: warning : qemuDomainObjTaint:7521 : Domain
> > >> id=2
> > >> > > name='s-1-VM' uuid=1a06d3a7-4e3f-4cba-912f-74ae24569bac is
> tainted:
> > >> > > high-privileges
> > >> > > Mar 21 11:49:59 mtl1-apphst03.mt.pbt.com.mt libvirtd[537]:
> > 2019-03-21
> > >> > > 11:49:59.402+: 540: warning : qemuDomainObjTaint:7521 : Domain
> > >> id=3
> > >> > > name='v-2-VM' uuid=af2a8342-cd9b-4b55-ba12-480634a31d65 is
> tainted:
> > >> > > high-privileges
> > >> > >
> > >> > >
> > >> > > What can be done about that ?
> > >> > >
> > >> >
> > >> >
> > >> > --
> > >> >
> > >> > Andrija Panić
> > >> >
> > >>
>
Jevgeniy, it may be a documentation bug. Take a look:
https://github.com/apache/cloudstack-documentation/pull/27/files
Tue, Mar 19, 2019, 9:09 Jevgeni Zolotarjov:
> That's it - libvirtd failed to start on second host.
> Tried restarting, but it does not start.
>
>
> >> Do you have some NUMA
technically possible to modify cpu setings without redeploying
> > from template?
> >
>
-is-out.html
r LVMcache, add more drives, use SSD,
NVMe and forget about the games with stuff above.
Wed, Mar 13, 2019 at 11:19, Fariborz Navidan:
> Hello All,
>
> How does disk caching affects the VM's performance? If yes, which type of
> disk caching do you advise?
>
> Thanks
>
Konstantin,
in general, this feature is very closely coupled with VM live migration,
which is usually undesired and run under human control, and the
implementation depends a lot on the compaction policy used in the cloud...
Actually, it can be implemented easily outside CloudStack.
Personally, I
9 at 6:47 PM Fariborz Navidan
> wrote:
>
> > Hello
> >
> > This is host's cpu resource statistics:
> >
> > CPU Utilized: 30.2%
> > CPU Allocated for VMs: 84.82%
> >
> > On Sat, Mar 9, 2019 at 4:20 PM Ivan Kudryavtsev <
> kudryavtsev..
AFAIK, the last CS version compatible with the container service is 4.6 or
something like that.
Sat, Mar 9, 2019, 9:04 Konstantin:
> Hello!
>
> I have following the installation guide here
>
> http://downloads.shapeblue.com/ccs/1.0/Installation_and_Administration_Guide.pdf
>
> When I trying to create my first
t; at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:150)
> > at
> >
> com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
> > at
> >
> >
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJob
Hi Fariborz, there are no prophets here; we are just humans. Find the
relevant logs and post them here for a review.
Fri, Mar 8, 2019, 11:00 Fariborz Navidan:
> Hello,
>
> I get the following error when creating new VM on a KVM cluster. Unable to
> create a deployment for VM[...]
>
> Please help me
>
Avoid HV caching when possible; better to add RAM to the VM or manage in-VM
writeback settings. Going with HV writeback, you will end up someday with
angry users who lost much more data than you can imagine, even if you don't
use migrations at all.
Want faster operations - improve your storage, build R0
No, you don't. Only VLAN support may be required, but it depends on the
chosen model.
Ignacio, Basic zone is planned for removal because 'advanced shared with sg'
does the same thing, so it is just duplicated functionality. That is why it
will be removed. If you are not in prod, use the option above to ensure
better future compatibility.
Sun, Mar 3, 2019, 15:03 Ignacio Ocampo:
> Hi
ned | NO | | 0
> ||
> | deployment_planner | varchar(255) | YES | | NULL
> ||
>
> ++--+------+-+-----++
> 14 rows in set (0.00 sec)
>
> regards,
> Thomas
>
;> > I am wondering how the cpu time usage is calculated for a VM. Is it in
> >> per
> >> > core basis or the total fraction of cpu a vm can use. For example,
> when
> >> we
> >> > set 2 cores and 2000 MHz, the VM receives total of 2000MHz of 4000MHz
> >> > processing power?
> >> >
> >> > Thanks
> >> >
> >>
> >
>
Actually, it works like everyone expects. In the case of KVM you can just
take a look at the running instance with ps xa. But I don't recommend setting
a CPU cap, though... The VM will experience CPU steal, and users will not be
happy. It is better to deploy nodes with low CPU frequency and many cores.
Without
gt;
> Thanks
>
torage Array level snapshots in place as a safety net...
>
> Thanks!!
> Sean
>
> -----Original Message-
> From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> Sent: Sunday, January 27, 2019 7:29 PM
> To: users ; cloudstack-fan <
> cloudstack-...@protonmail
ud.username=cloud
> db.cloud.password=cloud
> db.cloud.host=localhost
> db.cloud.driver=jdbc:mysql
> db.cloud.port=3306
> db.cloud.name=cloud
>
.html
> > >
> > > It is why i'm asking
> > >
> > > what other way do i have
> > >
> > > On Wed, Feb 20, 2019 at 12:05 PM Ivan Kudryavtsev <
> > > kudryavtsev...@bw-sw.com>
> > > wrote:
> > >
> > > >
. 2019 at 12:11, Ivan Kudryavtsev:
> This looks strange to me. I always use the repo
>
> root@cs2-head1:~# cat /etc/apt/sources.list.d/cloudstack.list
> deb http://cloudstack.apt-get.eu/ubuntu xenial 4.11
>
> and it works just fine.
>
> Wed, Feb 20, 2019 at 12:08
following the instructions of the official guide at
> http://docs.cloudstack.apache.org/en/4.11.2.0/installguide/index.html
>
> It is why i'm asking
>
> what other way do i have
>
> On Wed, Feb 20, 2019 at 12:05 PM Ivan Kudryavtsev <
> kudryavtsev...@bw-sw.com>
> wrote:
>