(and then some more virtio causing BSOD on every other win2008/2012 VM,
yes)
On Tue, 7 Apr 2020 at 17:23, Simon Weller wrote:
> Is the version current? I remember a bug related to this in virtio a
> couple of years ago.
>
> From: Adam Witwicki
> Sent: Tuesday, April 7, 2020 10:22 AM
>> as good as a collective suicide in the long run
What do you mean?
On Tue, Apr 7, 2020 at 8:48 PM Andrija Panic wrote:
> Shared mount point is used to "mount", i.e. attach a shared drive
> (enclosure, LUN, etc.) to the same mount point on multiple KVM hosts and
> then run a shared file system on top (GFS2, OCFS, etc.)
Shared mount point is used to "mount", i.e. attach a shared drive
(enclosure, LUN, etc.) to the same mount point on multiple KVM hosts and
then run a shared file system on top (GFS2, OCFS, etc. - all of which are as
good as a collective suicide in the long run...)
Stick to local - and you do unders
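Purely as an illustration of the "same mount point on every host" part (the
device name, cluster name and path below are made up, and a working
corosync/DLM cluster is assumed to already exist), the setup Andrija describes
looks roughly like this on each KVM host:

  # create a GFS2 filesystem on the shared LUN (run once, from one host;
  # -t is cluster_name:fs_name, -j is the number of journals, one per host)
  mkfs.gfs2 -p lock_dlm -t mycluster:primary1 -j 2 /dev/mapper/shared-lun

  # mount it at the identical path on every host
  mkdir -p /mnt/primary1
  mount -t gfs2 /dev/mapper/shared-lun /mnt/primary1

That common path (/mnt/primary1 here) is then what gets registered in
CloudStack as SharedMountPoint primary storage.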
replan the networking and start from scratch again - otherwise, it's a long
process to fix anything (as you are just testing).
On Tue, 7 Apr 2020 at 17:49, Marc-Andre Jutras wrote:
> Not recommended, mainly because it generates some weird routing problems...
>
> from your test, check your routing
I managed to use Shared Mount Point for local primary storage, but it
does not allow over-provisioning. I guess direct local storage also does
not allow over-provisioning.
On Tue, Apr 7, 2020 at 8:21 PM Simon Weller wrote:
> Ceph uses data replicas, so even if you only use 2 replicas (3 is
> recommended),
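On the over-provisioning point above: in CloudStack this is driven by the
storage.overprovisioning.factor setting (global, and I believe it can also be
overridden per primary storage on newer releases), and whether it is honoured
depends on the pool type. A minimal CloudMonkey sketch to inspect and change
the global value (the factor 2.0 is only an example):

  list configurations name=storage.overprovisioning.factor
  update configuration name=storage.overprovisioning.factor value=2.0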
Ceph uses data replicas, so even if you only use 2 replicas (3 is recommended),
your best case would basically be the IO of a single drive. You also need to
have a minimum of 3 management (monitor) nodes for Ceph, so personally, I'd
stick with local storage if you're focused on speed.
You're also running q
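To make the replica cost concrete, the replica count is a per-pool setting in
Ceph; a short, hedged sketch (the pool name "cloudstack" is just an example)
of where that 2-vs-3 choice lives:

  # how many copies of each object the pool keeps
  ceph osd pool get cloudstack size
  # keep 3 copies, and require at least 2 available to serve IO
  ceph osd pool set cloudstack size 3
  ceph osd pool set cloudstack min_size 2

Writes are acknowledged only after all replicas have them, which is why write
performance trends toward that of a single drive plus network latency.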
Not recommended, mainly because it generates some weird routing problems...
From your test, check your routing table on your CVM: same subnet on
different interfaces... (eth1 and eth2)
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0
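For readers without the full output, an illustrative table (not the poster's
actual output, but using the 172.26.0.0/24 addressing from this thread) for a
CVM with eth1 and eth2 both in the same subnet would look something like:

  Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
  0.0.0.0         172.26.0.1      0.0.0.0         UG    0      0        0 eth2
  172.26.0.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
  172.26.0.0      0.0.0.0         255.255.255.0   U     0      0        0 eth2

With two connected routes for the same prefix, outbound traffic for that
subnet picks one interface regardless of where the request arrived, so replies
can leave the "wrong" NIC and get dropped - which matches the symptoms in this
thread.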
Hello,
I have a single physical host running CloudStack. Primary storage is
currently mounted as an NFS share. The underlying filesystem is XFS
running on top of Linux software RAID-0. The underlying hardware consists of
2 NVMe SSDs.
The question is: could I reach faster I/O on VMs if I wo
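Not an answer, but it may be worth measuring before restructuring the storage.
A rough fio run (file path, size and runtime below are placeholders), executed
once inside a guest on the NFS-backed disk and once directly on the host's XFS
filesystem, shows how much the NFS layer actually costs:

  fio --name=randrw --filename=/path/to/testfile --size=4G --bs=4k \
      --rw=randrw --ioengine=libaio --iodepth=32 --numjobs=4 \
      --direct=1 --runtime=60 --time_based --group_reporting

If the two results are close, switching to local or SharedMountPoint storage
is unlikely to buy much; if they diverge a lot, the NFS path is the bottleneck.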
Is the version current? I remember a bug related to this in virtio a couple of
years ago.
From: Adam Witwicki
Sent: Tuesday, April 7, 2020 10:22 AM
To: users
Subject: Re: Instances lose internet
Yes using virtio for nics and storage
Thanks
Adam
Yes using virtio for nics and storage
Thanks
Adam
From: Simon Weller
Sent: Tuesday, April 7, 2020 4:19:59 PM
To: users@cloudstack.apache.org
Subject: Re: Instances lose internet
Are you using the virtio drivers?
From: Adam Witwicki
Sent: Tuesday, April 7, 2020 1:33 AM
To: users@cloudstack.apache.org
Subject: Instances lose internet
Hello,
I wonder if anyone else has seen this - probably not CloudStack, but a user on
our systems says tha
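For anyone following along, a quick way to confirm from the KVM host that a
given guest really is on virtio (the instance name below is a placeholder):

  virsh dumpxml <instance-name> | grep -i virtio

The NIC model and disk bus should both be reported as virtio, e.g.
<model type='virtio'/> and bus='virtio'; inside a Windows guest the
corresponding NetKVM/viostor driver versions show up in Device Manager.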
Yes, is this configuration not allowed?
How could I get around this, as I don't have another routed network.
Your public and management networks are in the same range???
-Public:
Gateway - 172.26.0.1
Netmask - 255.255.255.0
VLAN/VNI- vlan://untagged
vs.
-Management:
Gateway - 172.26.0.1
Netmask - 255.255.255.0
VLAN/VNI- vlan://untagged
On Tue, 7 Apr 2020 at 14:36, F5 wrote:
> Thanks for the feedba
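Purely as an illustration of keeping the two ranges quoted above apart (the
172.26.1.0/24 subnet and VLAN 100 are made-up values, not a recommendation for
this environment), the zone networks would look something like:

  -Management:
  Gateway - 172.26.0.1
  Netmask - 255.255.255.0
  VLAN/VNI - vlan://untagged

  -Public:
  Gateway - 172.26.1.1
  Netmask - 255.255.255.0
  VLAN/VNI - vlan://100

i.e. each traffic type gets its own subnet (and ideally its own VLAN), so the
system VMs no longer end up with the same prefix on two interfaces.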
Thanks for the feedback,
Here are the tests performed:
- Tests from the console proxy VM (v-193-VM):
root@v-193-VM:~# telnet 172.26.0.209 8250
Trying 172.26.0.209...
Connected to 172.26.0.209.
root@v-193-VM:~# /usr/local/cloud/systemvm/ssvm-check.
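Since the telnet test above targets port 8250, the port the management server
listens on for agent/system-VM connections, a useful counterpart check (run on
the management server, 172.26.0.209 in your test) is:

  # is the management server actually listening on 8250?
  ss -tlnp | grep 8250
  # which system VMs currently hold an agent connection?
  ss -tnp | grep 8250

If the SSVM/CPVM addresses never show up in the second list, the agent inside
the system VM is not connecting even though the TCP port is reachable.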