You can't add the new volume as it contains the same data (UUID) as the old
one, thus you need to detach the old one before adding the new one - of course
this means downtime for all VMs on that storage.
As you see, downgrading is simpler. For me v6.5 was working, while
anything above
Another question
What version could I downgrade to safely? I am at 6.9.
Thank You For Your Help !!
On Sun, Jun 21, 2020 at 11:38 PM Strahil Nikolov
wrote:
> You are definitely reading it wrong.
> 1. I didn't create a new storage domain on top of this new volume.
> 2. I used cli
>
> Something
Here is what I did to make my volume
gluster volume create imgnew2a replica 3 transport tcp \
  ov12.strg.srcle.com:/bricks/brick10/imgnew2a \
  ov13.strg.srcle.com:/bricks/brick11/imgnew2a \
  ov14.strg.srcle.com:/bricks/brick12/imgnew2a
On a host with the old volume I did this:
mount -t glusterfs
Thanks Strahil
I made a new gluster volume using only gluster CLI. Mounted the old volume
and the new volume. Copied my data from the old volume to the new volume.
Set the volume options like the old domain via the CLI. Tried to make a new
storage domain using the paths to the new servers.
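The sequence described above can be sketched as shell steps. The hostnames, brick paths and volume names are illustrative (taken from the commands quoted elsewhere in the thread), and it only works against a live Gluster cluster, so treat it as an outline rather than a tested script:

```shell
# Create the replacement volume (hosts and brick paths are examples)
gluster volume create imgnew2a replica 3 transport tcp \
    ov12.strg.srcle.com:/bricks/brick10/imgnew2a \
    ov13.strg.srcle.com:/bricks/brick11/imgnew2a \
    ov14.strg.srcle.com:/bricks/brick12/imgnew2a
gluster volume start imgnew2a

# Mount the old and new volumes on one host and copy the data,
# preserving ownership, permissions and timestamps
mount -t glusterfs ov12.strg.srcle.com:/images3  /mnt/old
mount -t glusterfs ov12.strg.srcle.com:/imgnew2a /mnt/new
cp -a /mnt/old/. /mnt/new/

# Re-apply the volume options oVirt expects, e.g. vdsm ownership (uid/gid 36)
gluster volume set imgnew2a storage.owner-uid 36
gluster volume set imgnew2a storage.owner-gid 36
```

Comparing `gluster volume info` output between the old and new volumes is an easy way to catch any remaining options that still need to be copied over.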
You are definitely reading it wrong.
1. I didn't create a new storage domain on top of this new volume.
2. I used cli
Something like this (in your case it should be 'replica 3'):
gluster volume create newvol replica 3 arbiter 1 ovirt1:/new/brick/path
ovirt2:/new/brick/path
While oVirt can do what you would like it to do concerning a single user
interface, with what you listed,
you're probably better off with just plain KVM/qemu and using virt-manager
for the interface.
Those memory/cpu requirements you listed are really tiny and I wouldn't
recommend even
Strahil,
It sounds like you used a "System Managed Volume" for the new storage
domain, is that correct?
Thank You For Your Help !
On Sun, Jun 21, 2020 at 5:40 PM C Williams wrote:
> Strahil,
>
> So you made another oVirt Storage Domain -- then copied the data with cp
> -a from the failed
Strahil,
So you made another oVirt Storage Domain -- then copied the data with cp -a
from the failed volume to the new volume.
At the root of the volume there will be the old domain folder id, e.g.
5fe3ad3f-2d21-404c-832e-4dc7318ca10d
in my case. Did that cause issues with making the new domain?
On 21 June 2020 at 23:26:32 GMT+03:00, David White via Users wrote:
>I'm reading through all of the documentation at
>https://ovirt.org/documentation/, and am a bit overwhelmed with all of
>the different options for installing oVirt.
>
>My particular use case is that I'm looking for a way to
In my situation I had only the ovirt nodes.
On 21 June 2020 at 22:43:04 GMT+03:00, C Williams wrote:
>Strahil,
>
>So should I make the target volume on 3 bricks which do not have oVirt
>--
>just Gluster? In other words, (3) CentOS 7 hosts?
>
>Thank You For Your Help !
>
>On Sun, Jun 21, 2020
I'm reading through all of the documentation at
https://ovirt.org/documentation/, and am a bit overwhelmed with all of the
different options for installing oVirt.
My particular use case is that I'm looking for a way to manage VMs on multiple
physical servers from 1 interface, and be able to
Strahil,
So should I make the target volume on 3 bricks which do not have oVirt --
just Gluster? In other words, (3) CentOS 7 hosts?
Thank You For Your Help !
On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov
wrote:
> I created a fresh volume (which is not an ovirt storage domain), set
>
I created a fresh volume (which is not an oVirt storage domain), set the
original storage domain in maintenance and detached it.
Then I 'cp -a ' the data from the old to the new volume. Next, I just added
the new storage domain (the old one was a kind of a 'backup') -
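The reason `cp -a` works for this is that it preserves ownership, permissions, timestamps and symlinks, which the storage domain tree depends on. A small self-contained demonstration, using plain temporary directories as stand-ins for the old and new gluster mount points (the `domain-uuid` path is a placeholder):

```shell
# Stand-in directories for the old and new gluster mount points
old=$(mktemp -d)
new=$(mktemp -d)

# Fake a storage-domain tree with a non-default file mode
mkdir -p "$old/domain-uuid/images"
echo data > "$old/domain-uuid/images/disk1"
chmod 660 "$old/domain-uuid/images/disk1"

# 'cp -a' (archive mode) preserves mode, ownership, timestamps and links
cp -a "$old/." "$new/"

# The 660 mode survives the copy
stat -c '%a' "$new/domain-uuid/images/disk1"
```

A plain `cp -r` would reset modes and timestamps according to the umask, which is exactly what you don't want when moving a domain between volumes.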
Hello,
Following the previous email, I think I'm hitting an odd problem, not
sure if it's my mistake or an actual bug.
1. Newly deployed 4.4 self-hosted engine on localhost NFS storage on a
single node.
2. Installation failed during the final phase with a non-descriptive
error message [1].
3. Log
On Thu, Jun 18, 2020 at 2:54 PM Yedidyah Bar David wrote:
>
> On Thu, Jun 18, 2020 at 2:37 PM Gilboa Davara wrote:
> >
> > On Wed, Jun 17, 2020 at 12:35 PM Yedidyah Bar David wrote:
> > > > However, when trying to install 4.4 on the test CentOS 8.x (now 8.2
> > > > after yesterday release),
Hello,
I look forward to hearing back about a fix for this !
Thank You All For Your Help !
On Sun, Jun 21, 2020 at 9:47 AM Sahina Bose wrote:
> Thanks Strahil.
>
> Adding Sas and Ravi for their inputs.
>
> On Sun, 21 Jun 2020 at 6:11 PM, Strahil Nikolov
> wrote:
>
>> Hello Sahina, Sandro,
>>
Strahil,
Thanks for the follow up !
How did you copy the data to another volume ?
I have set up another storage domain GLCLNEW1 with a new volume imgnew1.
How would you copy all of the data from the problematic domain GLCL3 with
volume images3 to GLCLNEW1 and volume imgnew1 and preserve all
Thanks Strahil.
Adding Sas and Ravi for their inputs.
On Sun, 21 Jun 2020 at 6:11 PM, Strahil Nikolov
wrote:
> Hello Sahina, Sandro,
>
> I have noticed that the ACL issue with Gluster (
> https://github.com/gluster/glusterfs/issues/876) is happening to
> multiple oVirt users (so far at least
Hello Sahina, Sandro,
I have noticed that the ACL issue with Gluster
(https://github.com/gluster/glusterfs/issues/876) is happening to multiple
oVirt users (so far at least 5) and I think that this issue needs greater
attention.
Has anyone from the RHHI team managed to reproduce the bug
Sorry to hear that.
I can say that for me 6.5 was working, while 6.6 didn't, and I upgraded to
7.0.
In the end, I ended up creating a new fresh volume and physically copying
the data there, then I detached the storage domains and attached the new
ones (which held the
Hi,
We have an open RFE for editing the IP address of an existing iSCSI storage
domain.
Currently, you should detach and re-add the domain.
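Until that RFE is implemented, the new portal can at least be verified from a host with the standard open-iscsi commands before detaching and re-adding the domain. The portal address and IQN below are placeholders, and this is not a tested oVirt procedure, just the usual iscsiadm workflow:

```shell
# Discover targets behind the new portal IP (placeholder address)
iscsiadm -m discovery -t sendtargets -p 10.0.0.13:3260

# Log in to a discovered target (placeholder IQN)
iscsiadm -m node -T iqn.2020-06.com.example:storage -p 10.0.0.13:3260 --login

# Verify the session is established
iscsiadm -m session
```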
On Sun, 21 Jun 2020 at 09:14, wrote:
> Hi!
>
> Do you have suggestions if we can add a new iSCSI target IP address to an
> existing Storage Data Domain?
>
Hi!
Do you have suggestions if we can add a new iSCSI target IP address to an
existing Storage Data Domain?
Earlier, we had an issue where the storage device unexpectedly rebooted. It has
3 IP addresses used for iSCSI connections.
For oVirt, we're connected to that storage device using 1 iSCSI