Hi Jeremy,
I've just had a quick word with Cliffe and will forward the
email to him.
Regards,
Paul S.
From: Jeremy Tourville
Sent: 30 March 2019 18:50
To: users@ovirt.org
Cc: Luca 'remix_tj' Lorenzetto
Subject: [ovirt-users
Sorry about the delay. We did confirm the jumbo frames. We dropped
iSCSI and switched to NFS on FreeNAS before I got your reply. Seems to
have gotten rid of any hiccups. And we like being able to see the files
better than with iSCSI anyway.
So we decided that we were very confident it's oVi
Hi,
Started from scratch...
And everything became stranger. First of all, after adding FQDN
entries for both the management and gluster interfaces in /etc/hosts (IP address
specification for gluster nodes is not possible because of a known bug),
and although I had proper DNS resolution for gluste
On Tue, Apr 2, 2019 at 7:56 PM wrote:
> Thanks Arik Hadas for your reply.
>
> And how can I do this regularly and automatically every day?
>
oVirt does not provide an integrated way to define periodic tasks like that.
So you need, e.g., to set up a cron job that executes a script that
triggers the
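A minimal sketch of such a setup (the schedule, file name, and script path are placeholder assumptions, not from this thread):

  # /etc/cron.d/ovirt-daily-task: run the task every day at 02:00
  0 2 * * * root /usr/local/bin/ovirt-daily-task.sh >> /var/log/ovirt-daily-task.log 2>&1

The script itself would then call whatever engine operation needs repeating, e.g. through the oVirt REST API or Python SDK.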
Thanks Arik Hadas for your reply.
And how can I do this regularly and automatically every day?
Thanks
On Tue, Apr 2, 2019 at 8:14 PM Leo David wrote:
>
> Just to loop in, I forgot to hit "Reply all"
>
> I have deleted everything in the engine gluster mount path, unmounted the
> engine gluster volume (not deleted the volume), and started the wizard
> with "Use already configured storage".
On Tue, Apr 2, 2019 at 4:57 PM Callum Smith wrote:
> Re-running the same config sorted this error... Though we're back here:
>
> - Clean NFS
> - Task run as normal user
> - name: Install oVirt Hosted Engine
>   hosts: virthyp04.virt.in.bmrc.ox.ac.uk
>   roles:
>     - ovirt.hosted_engine_setup
> - No
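For reference, a playbook like the quoted one would typically be run along these lines (inventory and file name are placeholders; the role ships in the ovirt-ansible-hosted-engine-setup package):

  ansible-playbook -i inventory hosted_engine.yml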
Just to loop in, I forgot to hit "Reply all"
I have deleted everything in the engine gluster mount path, unmounted the
engine gluster volume (not deleted the volume), and started the wizard
with "Use already configured storage". I have pointed it to use this gluster
volume, volume gets mount
---------- Forwarded message ---------
From: Leo David
Date: Tue, Apr 2, 2019, 15:10
Subject: Re: [ovirt-users] Re: HE - engine gluster volume - not mounted
To: Sahina Bose
I have deleted everything in the engine gluster mount path, unmounted the
engine gluster volume (not deleted the volume)
On Fri, Mar 29, 2019 at 4:58 PM wrote:
> Hi,
>
> Actual size 209 GiB and virtual size 150 GiB on a thin provisioned disk,
> as shown in the engine GUI.
> 30,9 GB of used space, is what I see on the windows machine when I remote
> desktop to it.
>
Is it possible that you have big snapshots in the chain?
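A quick way to check the chain from the host, assuming you can locate the disk's image directory under the storage domain (the path below is a placeholder):

  qemu-img info --backing-chain /path/to/image

This prints each volume in the chain with its size, so oversized snapshot layers stand out.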
Thanks Sahina for including the Gluster community mailing lists.
As Sahina already mentioned, we had a strong focus on the upgrade testing path
before releasing glusterfs-6. We conducted a test day and, along with the
functional pieces, tested upgrade paths from 3.12, 4 & 5 to release-6;
we encountered probl
Sorry, I can't find this.
On Tue, Apr 2, 2019 at 09:49, Strahil Nikolov
wrote:
> I think I already saw a solution in the mailing lists. Can you check and
> apply the fix mentioned there?
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, April 2, 2019 at 14:39:10 GMT+3, Marcelo Lean
Glad to hear it!
On Tue, Apr 2, 2019 at 3:53 PM Matthias Leopold
wrote:
>
> No, I didn't...
> I wasn't used to using both "rbd_user" and "rbd_keyring_conf" (I don't
> use "rbd_keyring_conf" in standalone Cinder), never mind
>
> After fixing that and dealing with the rbd feature issues I could
> p
Thank you for organizing this, Sandro! Surveys like this are always great
for helping to inform and improve the user experience of the application,
and for learning more about the users who are using it.
On Tue, Apr 2, 2019 at 6:31 AM Sahina Bose wrote:
>
>
> On Tue, Apr 2, 2019 at 12:07 PM Sandro B
No, I didn't...
I wasn't used to using both "rbd_user" and "rbd_keyring_conf" (I don't
use "rbd_keyring_conf" in standalone Cinder), never mind.
After fixing that and dealing with the rbd feature issues, I could
proudly start my first VM with a cinderlib-provisioned disk :-)
Thanks for the help!
I'
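For anyone following along, these are standard Cinder RBD driver options; a rough sketch of a working set (all values here are placeholders, not taken from this thread):

  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_keyring_conf = /etc/ceph/ceph.client.cinder.keyring

In oVirt's Managed Block Storage setup they are entered as key/value driver options on the storage domain.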
I think I already saw a solution in the mailing lists. Can you check and apply
the fix mentioned there?
Best Regards,
Strahil Nikolov
On Tuesday, April 2, 2019 at 14:39:10 GMT+3, Marcelo Leandro
wrote:
Hi, after updating my hosts to oVirt Node 4.3.2 with VDSM version
vdsm-4.30.11-
Hi, after updating my hosts to oVirt Node 4.3.2 with VDSM version
vdsm-4.30.11-1.el7,
my VMs' status does not update; if I do anything with a VM, like shutdown or
migrate, the status does not change until I restart VDSM on the host where the
VM is running.
vdsmd status:
ERROR Internal server error
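For context, a hedged set of commands one would run on such a host to inspect VDSM (standard unit name and log path):

  systemctl status vdsmd
  journalctl -u vdsmd --since "1 hour ago"
  tail -n 200 /var/log/vdsm/vdsm.log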
Is it possible you have not cleared the gluster volume between installs?
What's the corresponding error in vdsm.log?
On Tue, Apr 2, 2019 at 4:07 PM Leo David wrote:
>
> And here are the last lines of the ansible_create_storage_domain log:
>
> 2019-04-02 10:53:49,139+0100 DEBUG var changed: h
And here are the last lines of the ansible_create_storage_domain log:
2019-04-02 10:53:49,139+0100 DEBUG var changed: host "localhost" var
"otopi_storage_domain_details" type "" value: "{
"changed": false,
"exception": "Traceback (most recent call last):\n File
\"/tmp/ansible_ovirt_sto
> No need for that; but you will be required to redeploy them from the new
> engine to update their configuration.
So I keep the old engine running while deploying the new engine on different
storage? Curious.
I don't understand what "redeploy them [the old engine hosts] from
the new engine to
I have a storage data center that I can't use. It's a local one.
When I look in vdsm.log:
2019-04-02 10:55:48,336+0200 INFO (jsonrpc/2) [vdsm.api] FINISH
connectStoragePool error=Cannot find master domain:
u'spUUID=063d1217-6194-48a0-943e-3d873f2147de,
msdUUID=49b1bd15-486a-4064-878e-8030c8108
On Tue, Apr 2, 2019 at 12:07 PM Sandro Bonazzola
wrote:
> Thanks to the 143 participants in oVirt Survey 2019!
> The survey is now closed and the results are publicly available at
> https://bit.ly/2JYlI7U
> We'll analyze the collected data in order to improve oVirt, thanks to your
> feedback.
>
> As a fir
Hi,
I have just hit "Redeploy" and now the volume seems to be mounted:
Filesystem                                              Type  Size  Used Avail Use% Mounted on
/dev/mapper/onn-ovirt--node--ng--4.3.2--0.20190319.0+1  ext4   57G  3.0G   51G   6% /
devtmpfs
Regarding the IPv6 question:
"If you are using IPv6 for hosts, which kind of addressing you are using"?
"Dynamic or Static?"
I would add 'AutoConf' as well
Thx
Roni
On Tue, Apr 2, 2019 at 10:30 AM Dan Kenigsberg wrote:
> On Tue, Apr 2, 2019 at 9:36 AM Sandro Bonazzola
> wrote:
> >
> > Thanks
On Tue, Apr 2, 2019 at 11:18 AM Callum Smith wrote:
> No, the NFS is full of artefacts - should I rm -rf the whole thing
> every time?
>
Yes, right.
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e.
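For reference, clearing the storage between attempts means emptying the export directory on the NFS server; a sketch with a placeholder path:

  rm -rf /exports/hosted_engine_storage/*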
Hi Leo,
Can you please paste the output of "df -Th" and "gluster v status"?
I want to make sure the engine volume is mounted and the volumes and bricks
are up.
What does the vdsm log say?
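That is, on the host ("gluster v status" is shorthand for the full command):

  df -Th
  gluster volume status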
On Tue, Apr 2, 2019 at 2:06 PM Leo David wrote:
> Thank you very much !
> I have just installed a new fresh node, and triggered the
TASK [ovirt.hosted_engine_setup : Activate storage domain]
**
...
Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP
response code is 400.
usually means that the engine failed to activate that storage domain;
unfortunately engine error mes
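When that happens, the underlying cause is usually visible in the engine log on the engine VM (standard path):

  grep ERROR /var/log/ovirt-engine/engine.log | tail -n 50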
On Tue, Apr 2, 2019 at 11:24 AM Simone Tiraboschi wrote:
>
>
>
> On Tue, Apr 2, 2019 at 10:20 AM
> wrote:
>>
>> Thanks for your answer.
>>
>> > Yes, now you can do it via backup and restore:
>> > take a backup of the engine with engine-backup and restore it on a new
>> > hosted-engine VM on a ne
Thank you very much !
I have just installed a fresh node and triggered the single-instance
hyperconverged setup. It seems to fail at the final steps of the hosted
engine deployment:
[ INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted
On Tue, Apr 2, 2019 at 10:20 AM
wrote:
> Thanks for your answer.
>
> > Yes, now you can do it via backup and restore:
> > take a backup of the engine with engine-backup and restore it on a new
> > hosted-engine VM on a new storage domain with:
> > hosted-engine --deploy --restore-from-file=myback
Thanks for your answer.
> Yes, now you can do it via backup and restore:
> take a backup of the engine with engine-backup and restore it on a new
> hosted-engine VM on a new storage domain with:
> hosted-engine --deploy --restore-from-file=mybackup.tar.gz
That is great news. Just to clear things
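For completeness, the matching backup step on the old engine would look roughly like this (standard engine-backup usage; the file name matches the quoted restore command):

  engine-backup --mode=backup --file=mybackup.tar.gz --log=mybackup.log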
On Tue, Apr 2, 2019 at 9:36 AM Sandro Bonazzola wrote:
>
> Thanks to the 143 participants in oVirt Survey 2019!
> The survey is now closed and the results are publicly available at
> https://bit.ly/2JYlI7U
> We'll analyze the collected data in order to improve oVirt, thanks to your
> feedback.
>
> As a f