Hi,
We are trying to install a Ceph cluster on CentOS 7 guests on oVirt. We
are receiving many errors, and it is unable to create the master or the nodes.
Has anyone tried to deploy a Ceph cluster on a CentOS 7 guest in oVirt?
--
Thanks & regards,
Anantha Raghava
Do not print this e-mail unless required.
Hey guys!
I hit a weird issue today doing 'yum upgrade' that I'm hoping you can help
me with. The transaction failed because of systemd packages. I can't
update the systemd packages because they're already the latest, but old
versions are still around because of oVirt-related dependencies:
# rpm -q systemd
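In case it's useful, a minimal sketch of how duplicate package versions can
be listed and cleaned up, assuming yum-utils is installed (review the
transaction before removing anything oVirt still depends on):

# rpm -q --last systemd
# package-cleanup --dupes
# package-cleanup --cleandupes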
I'm wondering if anyone has any tips to improve file/directory operations
in an HCI replica 3 (no arbiter) configuration with SSDs and a 10GbE storage
network.
I am running the stock optimize-for-virt-store volume settings currently and am
wondering what, if any, improvements I can make for VM write speed and
file/directory operation latency.
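For context, the optimize-for-virt-store settings come from Gluster's virt
group profile. A minimal sketch of inspecting and experimenting with them,
assuming a volume named 'data' (test any changes on a non-production volume
first):

# gluster volume set data group virt
# gluster volume get data all | grep -E 'event-threads|io-threads'
# gluster volume set data server.event-threads 4
# gluster volume set data client.event-threads 4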
I was able to use the brick reset process to change the hostnames for all
gluster volumes. Traffic is now flowing through my 10gig gluster network.
Thanks for the assistance with all of this!
On Thu, Oct 17, 2019 at 2:42 PM Strahil wrote:
> For example my gluster network IP is 10.10.10.1/24 and the /etc/hosts entry is:
For example my gluster network IP is 10.10.10.1/24 and the /etc/hosts entry is:
10.10.10.1 gluster1.localdomain gluster1
Then I did 'gluster volume replace-brick data ovirt1:/gluster_bricks/data/data
gluster1:/gluster_bricks/data/data commit force'
So you use a hostname that is resolved either by DNS or /etc/hosts.
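A quick way to confirm the rename took effect, again assuming a volume named
'data':

# gluster volume info data | grep Brick
# gluster peer status
# gluster volume heal data info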
Sahina,
Thank you very much for the explanation. I definitely do want the Gluster
traffic on my 10gig network, but I am being extra cautious because there are
live VMs on the volumes.
This is my current configuration:
host0.blah.example.com: 10.11.0.220 (gluster interface: 10.12.0.220)
host1.blah.examp
On Thu, Oct 17, 2019 at 7:22 PM Jayme wrote:
> Thanks for the info, but where does it get the new hostname from? Do I
> need to change the actual server hostnames of my nodes? If I were to do
> that then the hosts would not be accessible due to the fact that the
> gluster storage subnet is isolated.
Thanks for the info, but where does it get the new hostname from? Do I
need to change the actual server hostnames of my nodes? If I were to do
that then the hosts would not be accessible due to the fact that the
gluster storage subnet is isolated.
I guess I'm confused about what gdeploy does during deployment.
The reset-brick and replace-brick commands affect only one brick and notify the
gluster cluster that a new hostname:/brick_path is being used.
Of course, you need a hostname that resolves to the IP that is on the storage
network.
WARNING: Ensure that no heals are pending, as the commands are wiping the brick.
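For completeness, a sketch of the reset-brick sequence, with the volume name
'data' and the hostnames from earlier in the thread assumed (run only after
heal info shows zero pending entries):

# gluster volume heal data info
# gluster volume reset-brick data ovirt1:/gluster_bricks/data/data start
# gluster volume reset-brick data ovirt1:/gluster_bricks/data/data gluster1:/gluster_bricks/data/data commit force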
Hi Dominik Holler,
Thank you for your reply.
After spending more time on it I discovered the problem. I was using the
interface for ovirtmgmt with bond mode 0; after changing the bond mode to 1
the deploy executed successfully.
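For anyone hitting the same thing: bridged oVirt networks such as ovirtmgmt
do not work over mode 0 (balance-rr), so the bond needs a supported mode. A
sketch of an active-backup bond config, with illustrative interface names:

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
ONBOOT=yes
BRIDGE=ovirtmgmt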
Thank you again!
Regards
Carlos
What does the reset-brick option do, and is it safe to do this on a live
system, or do all VMs need to be brought down first? How does resetting the
brick fix the issue with gluster peers using the server hostnames, which are
attached to IPs on the ovirtmgmt network?
On Thu, Oct 17, 2019 at 4:
Check why sanlock.service reports no PID.
Also check the logs of the broker and agent located at
/var/log/ovirt-hosted-engine-ha
You might have to increase the verbosity of the broker and agent.
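A minimal sketch of those checks, assuming the stock log and config
locations:

# systemctl status sanlock.service
# journalctl -u sanlock --since today
# tail -n 50 /var/log/ovirt-hosted-engine-ha/broker.log /var/log/ovirt-hosted-engine-ha/agent.log

Verbosity is raised by setting level=DEBUG in
/etc/ovirt-hosted-engine-ha/broker-log.conf and agent-log.conf and then
restarting the ovirt-ha-broker and ovirt-ha-agent services.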
Best Regards,
Strahil Nikolov
On Oct 17, 2019 08:00, adrianquint...@gmail.com wrote:
>
> Strahil
On Wed, Oct 16, 2019 at 8:38 PM Jayme wrote:
> Is there a way to fix this on an HCI deployment which is already in
> operation? I do have a separate gluster network, which is selected for the
> migration and gluster roles, but when I originally deployed I used just
> one set of hostnames, which resolve on the management network.
"Host host1.example.com cannot access the Storage Domain(s) attached to the
Data Center Default-DC1."
Can you check the vdsm logs from this host to see why the storage domains
are not attached?
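A sketch of where to start on the host, assuming default log paths (the
domain UUIDs and mount paths will differ per setup):

# grep -iE 'storage.?domain|connectStorageServer' /var/log/vdsm/vdsm.log | tail -n 20
# sanlock client status
# tail -f /var/log/vdsm/vdsm.log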
On Thu, Oct 17, 2019 at 9:43 AM Strahil wrote:
> SSH to the host and check the status of:
> sanlock.service