On Thu, Aug 23, 2018 at 10:19 AM Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:
> On Wed, Aug 22, 2018 at 5:44 PM Simone Tiraboschi <stira...@redhat.com>
> wrote:
>
>>> Thanks for the answer, Simone, but I have not understood.
>>> I used ovirt-node-ng-installer-ovirt-4.2-2018082006.iso to install the
>>> OS on the host node.
>>> Does this resemble what you call the "new node zero deployment flow"?
>>> When connecting to cockpit and selecting hosted engine, I get this
>>> screen:
>>>
>>> https://drive.google.com/file/d/1aPLCm0KW5IIp6f7xr41cysckPkFgdHtJ/view?usp=sharing
>>>
>>> What should I choose, and is any particular setup needed?
>>> As I wrote, the command "gdeploy -c gdeploy.conf" completed without
>>> errors on the host, and now I have the 3 (small, only to test the
>>> workflow for now) gluster volumes running.
>>
>> Now you can simply continue with the "Hosted Engine" button in cockpit.
>> You could also have skipped the manual execution of gdeploy from the CLI
>> and done everything from cockpit with the "Hyperconverged" button, but
>> you already did it.
>
> Thanks for your time, Simone.
> The installation went OK until the problem below, due to my engine
> gluster volume being too small (I set up 60Gb for it and it missed
> fitting by some MBs..). I got:
>
> [ INFO ] TASK [Check storage domain free space]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error:
> the target storage domain contains only 59.0GiB of available space while a
> minimum of 60.0GiB is required If you wish to use the current target
> storage domain by extending it, make sure it contains nothing before adding
> it."}
>
> Then I scratched it (it is a small nested test, so I simply discarded
> the changes) and updated gdeploy.conf to accommodate 65Gb for the engine
> gluster volume, and all went pretty smoothly then.
> I also noticed that the engine VM and its storage domain are already
> visible in the web admin portal, so there is no immediate need to add
> another data domain and get the hosted engine imported as before, correct?

Yes, as expected with the new flow.
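The failure above is easy to reproduce on paper: a volume with a nominal size of 60G ends up with slightly less than 60GiB of usable space once overhead is subtracted. A minimal sketch of a check along the lines of the "Check storage domain free space" task (the function name, the fixed 60GiB threshold, and the 1GiB overhead figure are illustrative assumptions, not the actual ansible implementation):

```python
# Hypothetical sketch of a free-space check like the one hosted-engine
# deployment runs; names and the overhead figure are assumptions.

GiB = 2**30  # binary gigabyte, the unit the error message reports
REQUIRED_FREE_GIB = 60.0  # minimum free space the deployment demands


def check_storage_domain_free_space(available_bytes: int) -> None:
    """Raise if the target storage domain has less free space than required."""
    available_gib = available_bytes / GiB
    if available_gib < REQUIRED_FREE_GIB:
        raise RuntimeError(
            f"the target storage domain contains only "
            f"{available_gib:.1f}GiB of available space while a minimum "
            f"of {REQUIRED_FREE_GIB:.1f}GiB is required"
        )


overhead = 1 * GiB  # illustrative overhead, not a measured value

# A nominal 60G volume falls just short of the 60GiB minimum:
try:
    check_storage_domain_free_space(60 * GiB - overhead)
except RuntimeError as e:
    print("FAILED:", e)

# A 65G volume leaves comfortable headroom:
check_storage_domain_free_space(65 * GiB - overhead)
print("65G volume passes the check")
```

This is why bumping the engine volume from 60Gb to 65Gb in gdeploy.conf made the check pass.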
Theoretically you can just end up with a single SD for both hosted-engine
and regular VMs, but keeping them distinct is highly recommended for
maintainability.

> I'm up and running, and I'm going to scratch it again and test your
> suggestion to bypass the gdeploy.conf step as well and go all GUI.
> In fact I notice that when configuring storage there is a note saying:
> "Please note that only replica 1 and replica 3 volumes are supported."
>
> ... I'll let you know if the all-GUI setup goes OK.
>
> In the meantime, one further question: it seems that the configured
> cluster does not have the "gluster service" checkbox enabled, so I think
> that is why I don't see any gluster-related configuration information
> inside the web admin portal...
> Is this expected with this approach?
> Is something different expected when trying the full GUI approach instead?
> Will the web admin portal be aware of the gluster config in that case?

Yes, in that case hosted-engine-setup will be aware of the hyperconverged
configuration and it will automatically configure the cluster for the
gluster service, also adding the two additional hosts.

> A further step I would like to try is to extend this initial setup to a
> 3-node gluster-based one...
>
> Thanks again,
> Gianluca
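For reference, the engine volume size in a gdeploy.conf for a hyperconverged setup is typically set on the logical volume backing the engine brick. A hypothetical fragment, assuming the file carves bricks out of LVM as the cockpit-generated configs do (section name, keys, volume group, and mount path here are illustrative, not verbatim from any real file):

```ini
; Illustrative gdeploy.conf fragment -- sizes the LV backing the
; engine brick at 65GB so the storage domain free-space check passes.
[lv1]
action=create
vgname=gluster_vg_sdb
lvname=gluster_lv_engine
mount=/gluster_bricks/engine
size=65GB
```

Check your actual gdeploy.conf (or the one cockpit generates) for the exact section and key names before editing.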
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/VQ5CKCPJ6SIONSWFB525M5L6R2RVXBBB/