[ovirt-users] Re: Issues with resizing Gluster Volumes with sharding and lookup-optimize enabled

2021-11-08 Thread Satheesaran Sundaramoorthi
On Mon, Nov 8, 2021 at 4:01 PM Sandro Bonazzola  wrote:

> On Sun, Nov 7, 2021 at 5:27 PM Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
>
>> Hello All,
>>
>> I recently found a RH solution regards issues with resizing Gluster
>> volumes with lookup-optimize enabled and I think it's worth sharing:
>> https://access.redhat.com/solutions/5896761
>> https://github.com/gluster/glusterfs/issues/1918
>>
>> @Sandro,
>> Do you think it's worth checking and disabling that Gluster option on
>> oVirt upgrade ? It might affect older setups that might still have it
>> enabled.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> Hello Strahil and others,

Yes, it would definitely help if we could disable the gluster volume option
'lookup-optimize' during upgrade.
The challenge is how to achieve this: we have no hooks in place for upgrade,
neither in oVirt nor in Gluster.
@Gobinda Das  Do you see any way to achieve this?
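Until such a hook exists, a manual sweep over the volumes is straightforward. The sketch below shows the kind of check an upgrade step could run; it assumes the stock `gluster volume get/set` CLI, and `disable_lookup_optimize` is just an illustrative helper name, not an existing hook:

```shell
# Sketch: turn off cluster.lookup-optimize on one volume if it is enabled.
# Assumes the standard `gluster volume get/set` CLI; run on any pool node.
disable_lookup_optimize() {
    vol=$1
    current=$(gluster volume get "$vol" cluster.lookup-optimize 2>/dev/null \
              | awk '/cluster.lookup-optimize/ {print $2}')
    if [ "$current" = "on" ]; then
        gluster volume set "$vol" cluster.lookup-optimize off
    fi
}

# To apply it to every volume (on a gluster node):
#   for vol in $(gluster volume list); do disable_lookup_optimize "$vol"; done
```

The helper only issues a `volume set` when the option is actually `on`, so it is safe to re-run.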

Thanks,
Satheesaran Sundaramoorthi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LKZCDLGWCGWSNGS4CRZUZWK4JZJC5KMF/


[ovirt-users] Re: Import Geo-Replicated Storage Domain fails

2021-06-07 Thread Satheesaran Sundaramoorthi
On Tue, Jun 8, 2021 at 3:38 AM Simon Scott  wrote:

> Anyone have any comments please?
>
> Hi Simon,

With the latest versions of Gluster, when geo-replication is set up, the
slave volume is made read-only.
Before you can import that storage domain, make sure to disable read-only
on the slave volume:
# gluster volume set <volname> features.read-only off
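To confirm the state before (and after) flipping the option, something like this sketch can be used; it assumes the standard `gluster volume get` CLI, and `slave_read_only` is an illustrative helper name:

```shell
# Sketch: report whether a (slave) volume is read-only, so you know whether
# `features.read-only off` is actually needed before importing the domain.
slave_read_only() {
    gluster volume get "$1" features.read-only 2>/dev/null \
        | awk '/features.read-only/ {print $2}'
}

# Example (on a geo-replication slave node, volume name from this thread):
#   [ "$(slave_read_only data2_bdt)" = "on" ] \
#       && gluster volume set data2_bdt features.read-only off
```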

I do not think this is detailed in the upstream documentation.
@Ritesh Chikatwar  Could you check whether the
documentation is available upstream?

Thanks,
Satheesaran Sundaramoorthi

> On 28 May 2021, at 08:01, Simon Scott  wrote:
>
> 
> Hi All,
>
> Does anyone have any further input on this please?
>
> Kind regards
>
> Simon...
>
> On 25 May 2021, at 09:26, Ritesh Chikatwar  wrote:
>
> 
> Sas, maybe you have have some thoughts on this
>
> On Tue, May 25, 2021 at 1:19 PM Vojtech Juranek 
> wrote:
>
>> (CC Pavel, who recently worked on DR, maybe he will have some thoughts)
>>
>> On Monday, 24 May 2021 17:56:56 CEST si...@justconnect.ie wrote:
>> > Hi All,
>> >
>> > I have 2 independent Hyperconverged Sites/Data Centers.
>> >
>> > Site A has a GlusterFS Replica 3 + Arbiter Volume that is Storage Domain
>> > data2
>>  This Volume is Geo-Replicated to a Replica 3 + Arbiter Volume at
>> > Site B called data2_bdt
> I have simulated a DR event and now want to import the Geo-Replicated
>> volume
>> > data2_bdt as a Storage Domain on Site B. Once imported I need to import
>> the
>> > VMs on this volume to run in Site B.
>>
>> > The Geo-Replication now works perfectly (thanks Strahil) but I haven't
>> been
>> > able to import the Storage Domain.
>>
>> > Please can someone point me in the right direction or documentation on
>> how
>> > this can be achieved.
>>
>> > Kind Regards
>> >
>> > Shimme...
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> > oVirt Code of Conduct:
>> > https://www.ovirt.org/community/about/community-guidelines/ List
>> Archives:
>> >
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LQCTZS6YTKMME
>> > 2EHBXJEGUM2WDNSYXEC/
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/A6J63RH74YKX7OCK5RCR5IQOUDSF7GG7/
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JMOT5WWGBCAR7RW7L5H3KY6FDK7STDTH/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/46G24KFGGYFLQIISZTP5GD2E5UYSNMQP/


[ovirt-users] Re: Fwd: Unable to Upgrade

2019-10-01 Thread Satheesaran Sundaramoorthi
On Tue, Oct 1, 2019 at 8:09 PM Jayme  wrote:

> Hello,
>
> I am running oVirt 4.3.6 and glusterd service and peers on all HCI nodes
> are connected and working properly after updating.  I'm not sure if I
> understand what your question or issue is.  It definitely should be safe to
> update your oVirt HCI to 4.3.6.  GlusterFS is able to update from 5.6 to
> 6.5 one host at a time, the other hosts are still able to connect after one
> has been updated to Gluster 6x
>
> Thanks, Jayme, for the update. Akshita probably has some other
problem with the infrastructure.

-- sas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CO7KLRGGLMDD4RW5KMTIHDRH2FXCZYTT/


[ovirt-users] Re: Fwd: Unable to Upgrade

2019-10-01 Thread Satheesaran Sundaramoorthi
On Tue, Oct 1, 2019 at 6:00 PM Jayme  wrote:

> In an oVirt HCI gluster configuration you should be able to take down at
> least one host at a time.  The procedure I use to upgrade my oVirt HCI
> cluster goes something like this:
>
> 1. Upgrade the oVirt engine setup packages.  Upgrade the engine.  Yum
> update and reboot.
> 2. Place the first host in maintenance mode and upgrade the host.
> Reboot.  Wait until the host is active and gluster bricks are fully healed.
> 3. Place the next host in maintenance and perform the same steps until all
> hosts are upgraded.
>
> *From: *"Akshita Jain" 
>>> *Subject: **Unable to Upgrade*
>>> *Date: *1 October 2019 at 11:12:58 CEST
>>> *To: *in...@ovirt.org
>>>
>>> After upgrading oVirt 4.3.4 to 4.3.6, the gluster is also upgrading from
>>> 5.6 to 6.5. But as soon as it upgrades gluster peer status shows
>>> disconnected.
>>> What is the correct method to upgrade oVirt with gluster HCI environment?
>>>
>>>
Hello Akshita,

Jayme's steps are exactly right for the update.
If, after updating to oVirt 4.3.6, you see your peers in the disconnected
state, could you confirm that the glusterd service is up and running on all
HCI nodes ( # systemctl status glusterd )?

Meanwhile, I will also try a quick update from gluster 5.6 to 6.5 and
post the results here.
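Jayme's "wait until the gluster bricks are fully healed" step, and the glusterd check above, can be sketched in shell roughly as follows. The helper names are illustrative, and the sketch assumes the stock systemctl and gluster CLIs:

```shell
# Sketch: confirm glusterd is running, then poll heal info until every brick
# of the volume reports zero pending entries.
glusterd_ok() {
    [ "$(systemctl is-active glusterd)" = "active" ]
}

heal_pending() {
    # Sum the "Number of entries: N" lines across all bricks of the volume.
    gluster volume heal "$1" info \
        | awk -F': ' '/^Number of entries:/ {sum += $2} END {print sum + 0}'
}

wait_for_heal() {
    # Block until the volume has no pending heal entries.
    while [ "$(heal_pending "$1")" -ne 0 ]; do
        sleep 10
    done
}
```

With these in place, the per-host upgrade loop becomes: maintenance mode, upgrade, reboot, then `wait_for_heal <volname>` before touching the next host.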

-- Satheesaran S ( sas )
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6ZWJHJO6ADLTDFUEXEWRTUPCCM52R62T/


[ovirt-users] Re: [ANN] oVirt 4.3.6 is now generally available

2019-10-01 Thread Satheesaran Sundaramoorthi
On Tue, Oct 1, 2019 at 11:27 AM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> Hi all,
>
> Sorry for asking again :/
>
> Is there any consensus on not using --emulate512 anymore while creating
> VDO volumes on Gluster?
> Since this parameter can not be changed once the volume is created and we
> are nearing production setup. I would really like to have an official
> advice on this.
>
> Best,
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
> Hello Guillaume Pavese,
If you are not using --emulate512 for the VDO volume, then the VDO volume
will be created as a 4K native volume (with a 4K block size).

There are a couple of things that are concerning here:
1. 4K native device support requires fixes in QEMU that will be part of
CentOS 7.7.2 ( not yet available ).
2. 4K native support with VDO volumes on Gluster is not yet validated
thoroughly.

Based on the above, it would be better to use emulate512=on, or to delay
your production setup ( if possible, until both items above are addressed )
to make use of 4K VDO volumes.
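For reference, whether an existing VDO device presents 512-byte (emulate512) or native 4K logical sectors can be read off its logical sector size. This is a sketch only: it assumes the util-linux `blockdev` tool and the vdo manager CLI, and the device/volume names are examples:

```shell
# Sketch: tell whether a VDO device presents 512-byte logical sectors
# (emulate512) or native 4K, via its logical sector size.
vdo_is_512e() {
    [ "$(blockdev --getss "$1")" = "512" ]
}

# Example:
#   vdo_is_512e /dev/mapper/vdo1 && echo "512e" || echo "4K native"
#
# Creating a new VDO volume with 512e emulation (vdo manager CLI flag):
#   vdo create --name=vdo1 --device=/dev/sdb --emulate512=enabled
```

Since the block size cannot be changed after creation, checking this before putting a volume into production is the safe order of operations.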

@Sahina Bose  Do you have any other suggestions?

-- Satheesaran S (sas)

>
> On Fri, Sep 27, 2019 at 3:19 PM Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Fri, Sep 27, 2019 at 5:21 AM Guillaume Pavese <
>> guillaume.pav...@interactiv-group.com> wrote:
>>
>>> I see that oVirt 4.3.6 finally has 4k domain support.
>>>
>>> - Would that mean that VDO enabled Gluster domains will be created
>>> without the --emulate512 workaround?
>>> - If the wizard to create the Gluster volumes has not yet removed that
>>> parameter, is it safe to edit & remove it manually before creation?
>>> - Should we expect performance increase by using the native 4k block
>>> size of VDO?
>>>
>>>
>>
>> +Nir Soffer  +Sundaramoorthi, Satheesaran
>>  +Gobinda Das  can you answer
>> here?
>>
>>
>>
>>> Thanks
>>>
>>> Guillaume Pavese
>>> Ingénieur Système et Réseau
>>> Interactiv-Group
>>>
>>>
>>> On Fri, Sep 27, 2019 at 12:00 AM Sandro Bonazzola 
>>> wrote:
>>>
 The oVirt Project is pleased to announce the general availability of
 oVirt 4.3.6 as of September 26th, 2019.



 This update is the sixth in a series of stabilization updates to the
 4.3 series.



 This release is available now on x86_64 architecture for:

 * Red Hat Enterprise Linux 7.7 or later (but < 8)

 * CentOS Linux (or similar) 7.7 or later (but < 8)



 This release supports Hypervisor Hosts on x86_64 and ppc64le
 architectures for:

 * Red Hat Enterprise Linux 7.7 or later (but < 8)

 * CentOS Linux (or similar) 7.7 or later (but < 8)

 * oVirt Node 4.3 (available for x86_64 only)



 Due to Fedora 28 being now at end of life this release is missing
 experimental tech preview for x86_64 and s390x architectures for Fedora 28.

 We are working on Fedora 29 and 30 support and we may re-introduce
 experimental support for Fedora in next release.



 See the release notes [1] for installation / upgrade instructions and a
 list of new features and bugs fixed.



 Notes:

 - oVirt Appliance is already available

 - oVirt Node is already available[2]

 oVirt Node and Appliance have been updated including:

 - oVirt 4.3.6: http://www.ovirt.org/release/4.3.6/

 - Wildfly 17.0.1:
 https://wildfly.org/news/2019/07/07/WildFly-1701-Released/

 - Latest CentOS 7.7 updates including:
   - Release for CentOS Linux 7 (1908) on the x86_64 Architecture
   - CEBA-2019:2601 CentOS 7 NetworkManager BugFix Update
   - CEBA-2019:2023 CentOS 7 efivar BugFix Update
   - CEBA-2019:2614 CentOS 7 firewalld BugFix Update
   - CEBA-2019:2227 CentOS 7 grubby BugFix Update
   - CESA-2019:2258 Moderate CentOS 7 http-parser Security Update
   - CESA-2019:2600 Important CentOS 7 kernel Security Update
   - CEBA-2019:2599 CentOS 7 krb5 BugFix Update
   - CEBA-2019:2358 CentOS 7 libguestfs BugFix Update

[ovirt-users] Re: [ANN] oVirt 4.3.6 is now generally available

2019-09-27 Thread Satheesaran Sundaramoorthi
On Fri, Sep 27, 2019 at 4:13 PM Sandro Bonazzola 
wrote:

> On Fri, Sep 27, 2019 at 11:31 AM Rik Theys <
> rik.th...@esat.kuleuven.be> wrote:
>
>> Hi,
>>
>> After upgrading to 4.3.6, my storage domain can no longer be activated,
>> rendering my data center useless.
>>
> Hello Rik,

Did you check in the engine log why exactly the storage domain can't be
activated? Could you send the engine.log?

>
>> My storage domain is local storage on a filesystem backed by VDO/LVM. It
>> seems 4.3.6 has added support for 4k storage.
>>
Do you have gluster in this combination with VDO/LVM?
There are some QEMU fixes that are required for 4K storage to work with
oVirt.
These fixes will be part of CentOS 7.7 batch update 2.


> My VDO does not have the 'emulate512' flag set.
>>
This means your VDO volume would have been configured as a 4K native device
by default.
Was it this way before updating to oVirt 4.3.6?
I believe so, because you can't change the block size of VDO on the fly.
So my guess is that you have been fortunate so far running with a 4K VDO
volume.

>
>> I've tried downgrading all packages on the host to the previous versions
>> (with ioprocess 1.2), but this does not seem to make any difference. Should
>> I also downgrade the engine to 4.3.5 to get this to work again. I expected
>> the downgrade of the host to be sufficient.
>>
>> As an alternative I guess I could enable the emulate512 flag on VDO but I
>> can not find how to do this on an existing VDO volume. Is this possible?
>>
As stated above, this is not possible.
Once a VDO volume is created, its block size can't be changed dynamically.

-- Satheesaran S ( sas )
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BGMYU5BOTZWGLJRZMZHQX3PN72VF6O74/


[ovirt-users] Re: [ovirt-announce] Re: Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-20 Thread Satheesaran Sundaramoorthi
On Fri, May 17, 2019 at 1:12 AM Nir Soffer  wrote:

> On Thu, May 16, 2019 at 10:12 PM Darrell Budic 
> wrote:
>
>> On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:
>>
>>
>> On Thu, May 16, 2019 at 8:38 PM Darrell Budic 
>> wrote:
>>
>>> I tried adding a new storage domain on my hyper converged test cluster
>>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster
>>> volume fine, but it’s not able to add the gluster storage domain (as either
>>> a managed gluster volume or directly entering values). The created gluster
>>> volume mounts and looks fine from the CLI. Errors in VDSM log:
>>>
>>> ...
>>
>>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
>>> file system doesn't supportdirect IO (fileSD:110)
>>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
>>> createStorageDomain error=Storage Domain target is unsupported: ()
>>> from=:::10.100.90.5,44732, flow_id=31d993dd,
>>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>>>
>>
>> The direct I/O check has failed.
>>
>>
>> So something is wrong in the files system.
>>
>> To confirm, you can try to do:
>>
>> dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct
>>
>> This will probably fail with:
>> dd: failed to open '/path/to/mountoint/test': Invalid argument
>>
>> If it succeeds, but oVirt fail to connect to this domain, file a bug and
>> we will investigate.
>>
>> Nir
>>
>>
>> Yep, it fails as expected. Just to check, it is working on pre-existing
>> volumes, so I poked around at gluster settings for the new volume. It has
>> network.remote-dio=off set on the new volume, but enabled on old volumes.
>> After enabling it, I’m able to run the dd test:
>>
>> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
>> volume set: success
>> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1
>> oflag=direct
>> 1+0 records in
>> 1+0 records out
>> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>>
>> I’m also able to add the storage domain in ovirt now.
>>
>> I see network.remote-dio=enable is part of the gluster virt group, so
>> apparently it's not getting set by ovirt during the volume creation/optimize
>> for storage?
>>
>
> I'm not sure who is responsible for changing these settings. oVirt always
> required directio, and we
> never had to change anything in gluster.
>
> Sahina, maybe gluster changed the defaults?
>
> Darrell, please file a bug, probably for RHHI.
>

Hello Darrell & Nir,

Do we have a bug available now for this issue?
I just need to make sure performance.strict-o-direct=on is enabled on that
volume.
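The two options discussed in this thread can be checked on a volume with a small sketch like the one below; it assumes the standard `gluster volume get` CLI, and the helper names are illustrative:

```shell
# Sketch: query a single volume option's effective value.
vol_opt() {
    gluster volume get "$1" "$2" 2>/dev/null \
        | awk -v o="$2" '$1 == o {print $2}'
}

# Report both direct-I/O-related options from this thread for a volume.
check_dio_opts() {
    vol=$1
    echo "network.remote-dio=$(vol_opt "$vol" network.remote-dio)"
    echo "performance.strict-o-direct=$(vol_opt "$vol" performance.strict-o-direct)"
}
```

Running `check_dio_opts <volname>` on any pool node shows at a glance whether the volume matches the virt-group expectations discussed above.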


Satheesaran Sundaramoorthi

Senior Quality Engineer, RHHI-V QE

Red Hat APAC <https://www.redhat.com>

<https://red.ht/sig>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B64SWJKMFWMHZUOFRHIRSKI7JSQ2XWON/


[ovirt-users] VMs going in to non-responding state

2017-09-06 Thread Satheesaran Sundaramoorthi
Hi All,

I have created a converged setup with a cluster having both virt and gluster
capability. There are three hosts in this cluster, and the cluster also has
'native access to gluster domain' enabled, which allows VMs to use the
libgfapi access mechanism.

With this setup, I see the created VMs landing in a non-responding state
after some time.

I have raised bug [1] for this issue.
Any help with this issue would be appreciated.

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1488863

Thanks in advance.

-- Satheesaran S ( sas )
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt split brain resolution

2017-06-27 Thread Satheesaran Sundaramoorthi
On Sat, Jun 24, 2017 at 3:17 PM, Abi Askushi 
wrote:

> Hi all,
>
> For the records, I had to remove manually the conflicting directory and ts
> respective gfid from the arbiter volume:
>
>  getfattr -m . -d -e hex e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
>
> That gave me the gfid: 0x277c9caa9dce4a17a2a93775357befd5
>
> Then cd .glusterfs/27/7c
>
> rm -rf 277c9caa-9dce-4a17-a2a9-3775357befd5 (or move it out of there)
>
> Triggerred heal: gluster volume heal engine
>
> Then all ok:
>
> gluster volume heal engine info
> Brick gluster0:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster1:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
> Brick gluster2:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
> Thanx.
>

Hi Abi,

What is the volume type of the 'engine' volume?
Could you also provide the output of 'gluster volume info engine', so we can
take a closer look at the problem?
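For reference, the backend path Abi removed can be derived mechanically from the gfid xattr. This sketch assumes the standard `.glusterfs` backend layout on a brick (`.glusterfs/<first 2 hex>/<next 2 hex>/<full gfid>`):

```shell
# Sketch: convert the raw hex gfid xattr into the dashed UUID form.
hex_to_gfid() {
    h=${1#0x}
    printf '%s-%s-%s-%s-%s\n' \
        "$(printf %s "$h" | cut -c1-8)" \
        "$(printf %s "$h" | cut -c9-12)" \
        "$(printf %s "$h" | cut -c13-16)" \
        "$(printf %s "$h" | cut -c17-20)" \
        "$(printf %s "$h" | cut -c21-32)"
}

# Derive the .glusterfs hardlink path (relative to the brick root).
gfid_path() {
    printf '.glusterfs/%s/%s/%s\n' \
        "$(printf %s "$1" | cut -c1-2)" \
        "$(printf %s "$1" | cut -c3-4)" \
        "$1"
}
```

Using the gfid from Abi's message, `gfid_path "$(hex_to_gfid 0x277c9caa9dce4a17a2a93775357befd5)"` prints `.glusterfs/27/7c/277c9caa-9dce-4a17-a2a9-3775357befd5`, which matches the directory removed above.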

-- sas
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users