Hi, Swen. Do you test with direct ops or with buffered/cached ones? Is it a
pure write test, or rw with a certain rw percentage? I hardly believe the
deployment can do 250k IOPS for writing with a single-VM test.
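For example, something like this with fio (flags and the device path are
illustrative only):

  # direct (page cache bypassed) 4k random write:
  fio --name=wtest --filename=/dev/sdX --rw=randwrite --bs=4k --iodepth=32 \
      --numjobs=4 --ioengine=libaio --direct=1 --runtime=60 --group_reporting
  # mixed random read/write at a chosen percentage, e.g. 70/30:
  fio --name=rwtest --filename=/dev/sdX --rw=randrw --rwmixread=70 --bs=4k \
      --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --group_reporting

Dropping --direct=1 gives you the buffered/cached numbers instead.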
On 2 Feb 2018 at 04:56, "S. Brüseke - proIO GmbH" <
s.brues...@proio.com> wrote:
I am also testing with ScaleIO on CentOS7 with KVM.
Hi, I don't see spontaneous reboots yet, but I have heard that some people
hit them with the new Intel ucode.
On 2 Feb 2018 at 03:53, "Andrija Panic"
wrote:
> Thx Ivan for sharing - no reboot issues because of the problematic Intel
> microcode?
>
> This is just the Meltdown fix for now, and btw congrats on the courage to
> do so this early (since no final solution yet).
Hi all,
I will be at FOSDEM in Brussels this weekend, and I know Daan is going to be
there too. If you're going, it would be lovely to meet you and discuss
CloudStack among other things - tweet me @rhtyd.
Cheers.
rohit.ya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London
I am also testing with ScaleIO on CentOS7 with KVM. With a 3-node cluster where
each node has 2x 2TB SSDs (Samsung PM1663a), I get 250,000 IOPS when doing a fio
test (random 4k).
The only problem is that I do not know how to mount the shared volume so that
KVM can use it to store VMs on it. Does an
A bit late, but:
- for any IO-heavy (medium even...) workload, try to avoid CEPH; no
offence, it simply takes a lot of $$$ to make CEPH perform in random IO
worlds (imagine, RHEL and the vendors provide only a reference architecture
with a SEQUENTIAL benchmark workload, not random) - not to mention a huge lis
Thx Ivan for sharing - no reboot issues because of the problematic Intel
microcode?
This is just the Meltdown fix for now, and btw congrats on the courage to do
so this early (since no final solution yet).
FYI, CentOS/RHEL has already patched everything (kernel and qemu/libvirt),
but we are also on Ubuntu..
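For what it's worth, a quick sanity check that the KPTI part is actually
active (a sketch - the sysfs path only exists on kernels new enough to
expose it):

  cat /sys/devices/system/cpu/vulnerabilities/meltdown
  grep -qw pti /proc/cpuinfo && echo "KPTI enabled"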
In our VMs, in resolv.conf, we have the internal IP address of the VR as the
first nameserver, then the public ones (use.external.dns set to false at the
zone level).
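So a guest's /etc/resolv.conf ends up looking roughly like this (addresses
are made up):

  # VR internal IP first
  nameserver 10.1.1.1
  # then the public resolvers from zone settings
  nameserver 8.8.8.8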
On 1 February 2018 at 21:16, wrote:
> Hello,
>
> we are using advanced networking
>
>
>
> Andrija Panic wrote on 2018-02-01 23:25:
Actually, we have this feature (we call it internally
online-storage-migration) to migrate a volume from CEPH/NFS to SolidFire
(thanks to Mike Tutkowski).
There is a libvirt mechanism where basically you start another PAUSED VM on
another host (same name and same XML file, except the storage volume
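In plain virsh terms the flow is something like this (a sketch only - the
hostname, domain name and the edited XML are made up, and the target volume
has to be pre-created on the destination first):

  # myvm-dst.xml is the same domain XML with only the disk source changed
  virsh migrate --live --persistent --copy-storage-all \
      --xml myvm-dst.xml myvm qemu+ssh://dst-host/system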
Hello,
we are using advanced networking
Andrija Panic wrote on 2018-02-01 23:25:
Hi,
you didn't write what kind of networking you have - are VMs supposed to
use
the VR (advanced networking) for DNS (as default) or not?
In zone settings, we have set the public DNS to Google's also, and some
internal ones.
Hi,
you didn't write what kind of networking you have - are VMs supposed to use
the VR (advanced networking) for DNS (as default) or not?
In zone settings, we have set the public DNS to Google's also, and some
internal ones.
SSVM and CPVM are assigned both 2 internal and then 2 external servers (in
that
Vladimir,
the original error definitely looks like a MySQL timeout (I assume because of
HAPROXY in the middle), and we also had this setup originally (MGMT server
using HAPROXY on top of Galera nodes...) but this was confirmed to be an
issue no matter what we changed on HAProxy or MySQL, and at that time
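For reference, these were the knobs we kept tuning (values illustrative - as
said, no combination of them saved us; HAProxy's client/server timeouts and
MySQL's wait_timeout have to agree or idle connections get cut):

  # haproxy.cfg fragment
  defaults
      timeout connect 10s
      timeout client  8h
      timeout server  8h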
Hi Everyone,
I’m cross-posting again…
I think that it would be great if we could have a couple of CloudStack
presentations, but maybe we could pool a user’s story with a development
piece to showcase CloudStack from both perspectives.
Anyone up for this?
paul.an...@shapeblue.com
www.shapeblue.com
Thanks a lot, any help will be much appreciated!
On Wed, Jan 31, 2018 at 05:23:25PM +, Nux! wrote:
> It's possible there are timeouts being hit somewhere. I'd take this to dev@
> to be honest; I am not very familiar with the SSVM internals.
>
> --
> Sent from the Delta quadrant using Borg
No, the primary storage is local (in this case), but the primary storage isn't
involved, as I'm creating a template from a snapshot which resides on
secondary storage.
The snapshot's size is ~300GB.
On Wed, Jan 31, 2018 at 06:18:43PM +, Simon Weller wrote:
> Is your primary storage