Upgrading gluster from version 3.12 or 4.1 (included in ovirt 4.2.x) to 5.3
(in ovirt 4.3) seems to cause this, due to a bug in the gluster upgrade
process. It's an unfortunate side effect of us upgrading ovirt
hyper-converged systems. Installing new should be fine, but I'd wait for
gluster to get https://bugzilla.redhat.com/show_bug.cgi?id=1684385 included
in the version ovirt installs before installing a hyper-converged cluster.
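
If you want to check which gluster you're actually getting before
committing, the stock rpm and gluster commands are enough:

   # Show the installed gluster packages and the running version
   rpm -qa | grep glusterfs
   gluster --version

Anything still reporting 5.3 won't have that fix yet, so I'd hold off until
the version ovirt pulls in moves past it.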

I just upgraded my 4.2.8 cluster to 4.3.1, leaving my separate gluster
3.12.15 servers alone, and it worked fine, except for a different bug that
screws up HA engine permissions on launch; it looks like that one is being
fixed under a separate bug report.
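
In case anyone else trips over the HA engine part: I haven't dug into it,
but if it turns out to be the usual ownership problem on the engine disk
images, this is the sort of check and stop-gap that applies. The mount path
below is only an example (substitute your own engine storage domain), and I
haven't verified it against this particular bug, so treat it as a sketch:

   # Look for engine image files that have fallen back to root:root
   ls -lR /rhev/data-center/mnt/glusterSD/host1:_engine/*/images/

   # Restore the ownership vdsm expects
   chown -R vdsm:kvm /rhev/data-center/mnt/glusterSD/host1:_engine/*/images/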

Sandro, it's unfortunate I can't take more part in testing days, but they
haven't been happening at times when I can participate, and a one-day test
isn't really something I can join in on often. I sometimes try to keep up
with the RCs on my test cluster, but major version changes wait until I get
time to consider them, unfortunately. I'm also a little surprised that a
major upstream issue like that bug hasn't caused you to issue more
warnings; it's something that is going to affect everyone who upgrades a
hyper-converged system. Any discussion on why more news wasn't released
about it?

  -Darrell


> On Mar 15, 2019, at 11:50 AM, Jayme <jay...@gmail.com> wrote:
> 
> That is essentially the behaviour that I've seen.  I wonder if perhaps it 
> could be related to the increased heal activity that occurs on the volumes 
> during reboots of nodes after updating.
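
If it is heal pressure, it's easy to watch while the nodes reboot;
"engine" below is just an example volume name:

   # List the entries pending heal on each brick of a volume
   gluster volume heal engine info

   # Newer gluster releases also accept a terser per-brick summary
   gluster volume heal engine info summary

If the brick drops line up with a big heal backlog, that would back the
theory up.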
> 
> On Fri, Mar 15, 2019 at 12:43 PM Ron Jerome <ronj...@gmail.com> wrote:
> Just FYI, I have observed similar issues where a volume becomes unstable
> for a period of time after the upgrade, but then seems to settle down
> after a while. I've only witnessed this in the 4.3.x versions. I suspect
> it's more of a Gluster issue than an oVirt one, but troubling nonetheless.
> 
> On Fri, 15 Mar 2019 at 09:37, Jayme <jay...@gmail.com> wrote:
> Yes, that is correct. I don't know if the upgrade to 4.3.1 itself caused
> issues, or if rebooting all the hosts again to apply node updates is what
> started causing brick issues for me again. I started having similar brick
> issues after upgrading to 4.3 originally; those seemed to have stabilized.
> Prior to 4.3 I never had a single GlusterFS issue or a brick go offline
> on 4.2.
> 
> On Fri, Mar 15, 2019 at 9:48 AM Sandro Bonazzola <sbona...@redhat.com> wrote:
> 
> 
> On Fri, Mar 15, 2019 at 1:38 PM Jayme <jay...@gmail.com> wrote:
> I, along with others, had GlusterFS issues after 4.3 upgrades: the
> "failed to dispatch handler" issue with bricks going down intermittently.
> After some time it seemed to have corrected itself (at least in my
> environment) and I hadn't had any brick problems in a while. I upgraded
> my three-node HCI cluster to 4.3.1 yesterday and again I'm running into
> brick issues. They will all be up and running fine, then all of a sudden
> a brick will randomly drop and I have to force start the volume to get it
> back up.
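
For anyone searching the archives later: the force start Jayme describes
is plain gluster CLI, nothing oVirt-specific ("data" below is a
placeholder volume name):

   # Spot bricks that have gone offline ("N" in the Online column)
   gluster volume status data

   # Restart only the missing brick processes; healthy bricks are untouched
   gluster volume start data force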
> 
> Just to clarify, you already were on oVirt 4.3.0 + GlusterFS 5.3-1 and
> upgraded to oVirt 4.3.1 + GlusterFS 5.3-2, right?
> 
> 
> Have any of these Gluster issues been addressed in 4.3.2 or any other 
> releases/patches that may be available to help the problem at this time?
> 
> Thanks!
> 
> 
> -- 
> SANDRO BONAZZOLA
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA <https://www.redhat.com/>
> sbona...@redhat.com

