Re: [ovirt-users] Safe to upgrade HE hosts from GUI?
On 31 July 2016 at 18:26, Kenneth Bingham wrote:
> Roy, would I do that in the Cluster tab, with the New button, and then
> select the Hosted Engine sidebar in the host configurator? I noticed this
> option existed, but the RHEV docs and developer blogs I've been
> referencing specify the 'hosted-engine --deploy' method.

On 4.0 this is the supported way of doing it. It fixes and prevents many
problems and, more than that, it keeps everything in one place. The docs
will be updated for this if they haven't been already. Please also refer
to this oVirt blog post [1].

[1] http://www.ovirt.org/blog/2016/07/Manage-Your-Hosted-Engine-Hosts-Deployment/

[quoted thread trimmed]
Re: [ovirt-users] Safe to upgrade HE hosts from GUI?
On Mon, Aug 1, 2016 at 3:54 AM, Wee Sritippho wrote:
> On 29/7/2559 17:07, Simone Tiraboschi wrote:
>> But for sure you have a connection pointing to host02 somewhere; did
>> you try to manually deploy from the CLI connecting the gluster volume
>> on host02?
>
> If I recall correctly, yes.

OK, so please reboot your host before trying again, to make sure that
every reference gets cleaned up.

[quoted thread trimmed]
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Safe to upgrade HE hosts from GUI?
On 29/7/2559 17:07, Simone Tiraboschi wrote:
> But for sure you have a connection pointing to host02 somewhere; did
> you try to manually deploy from the CLI connecting the gluster volume
> on host02?

If I recall correctly, yes.

[quoted thread trimmed]

--
Wee
Re: [ovirt-users] Safe to upgrade HE hosts from GUI?
Roy, would I do that in the Cluster tab, with the New button, and then
select the Hosted Engine sidebar in the host configurator? I noticed this
option existed, but the RHEV docs and developer blogs I've been
referencing specify the 'hosted-engine --deploy' method.

On Sun, Jul 31, 2016 at 3:04 AM Roy Golan wrote:
> If you ever want to add a hosted-engine host to your setup, please do
> that from the UI and not from the CLI. That will prevent all this
> confusion.

[quoted thread trimmed]
Re: [ovirt-users] Safe to upgrade HE hosts from GUI?
On 30 July 2016 at 02:48, Kenneth Bingham wrote:
> Aw crap. I did exactly the same thing, and this could explain a lot of
> the issues I've been pulling my beard out over. Every time I ran
> 'hosted-engine --deploy' on a RHEV-M|NODE host, I entered the FQDN of
> *that* host, not the first host, as the origin of the GlusterFS volume,
> because at the time I didn't realize that
> a. the manager would key deduplication on the URI of the volume, and
> b. the volume would be mounted over FUSE, not NFS, so no single point
> of entry is created by using the FQDN of the first host, because the
> GlusterFS client will keep connections with all peers.

If you ever want to add a hosted-engine host to your setup, please do that
from the UI and not from the CLI. That will prevent all this confusion.

[quoted thread trimmed]
Re: [ovirt-users] Safe to upgrade HE hosts from GUI?
Aw crap. I did exactly the same thing, and this could explain a lot of the
issues I've been pulling my beard out over. Every time I ran
'hosted-engine --deploy' on a RHEV-M|NODE host, I entered the FQDN of
*that* host, not the first host, as the origin of the GlusterFS volume,
because at the time I didn't realize that
a. the manager would key deduplication on the URI of the volume, and
b. the volume would be mounted over FUSE, not NFS, so no single point of
entry is created by using the FQDN of the first host, because the
GlusterFS client will keep connections with all peers.

On Fri, Jul 29, 2016 at 6:08 AM Simone Tiraboschi wrote:
[quoted thread trimmed]
Re: [ovirt-users] Safe to upgrade HE hosts from GUI?
On Fri, Jul 29, 2016 at 11:35 AM, Wee Sritippho wrote:
> Weird. The configuration on all hosts is already referring to host01.

But for sure you have a connection pointing to host02 somewhere; did you
try to manually deploy from the CLI connecting the gluster volume on
host02?

> Also, in the storage_server_connections table:
>
> engine=> SELECT * FROM storage_server_connections;
>                   id                  |                connection                | user_name | password | iqn | port | portal | storage_type | mount_options | vfs_type  | nfs_version | nfs_timeo | nfs_retrans
> --------------------------------------+------------------------------------------+-----------+----------+-----+------+--------+--------------+---------------+-----------+-------------+-----------+-------------
>  bd78d299-c8ff-4251-8aab-432ce6443ae8 | host01.ovirt.forest.go.th:/hosted_engine |           |          |     |      |      1 |            7 |               | glusterfs |             |           |
> (1 row)

Please tune also the value of network.ping-timeout for your glusterFS
volume, to avoid this:
https://bugzilla.redhat.com/show_bug.cgi?id=1319657#c17

> --
> Wee
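The ping-timeout tuning mentioned above can be applied with the standard gluster CLI. A hedged sketch, not from the thread: the volume name `hosted_engine` is inferred from the connection string shown in the table, and the 10-second value is only an illustration; check the referenced Bugzilla comment for the exact recommendation before applying it.

```shell
# Assumed volume name "hosted_engine" (from host01...:/hosted_engine above);
# the 10-second value is an illustration, not taken from this thread.
gluster volume set hosted_engine network.ping-timeout 10

# Verify the value that is now in effect.
gluster volume get hosted_engine network.ping-timeout
```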
Re: [ovirt-users] Safe to upgrade HE hosts from GUI?
On 29/7/2559 15:50, Simone Tiraboschi wrote:
> First step: edit /etc/ovirt-hosted-engine/hosted-engine.conf on all your
> hosted-engine hosts to ensure that the storage field always points to
> the same entry point (host01, for instance).
> Then on each host you can add something like:
>
> mnt_options=backupvolfile-server=host02.ovirt.forest.go.th:host03.ovirt.forest.go.th,fetch-attempts=2,log-level=WARNING,log-file=/var/log/engine_domain.log
>
> Then check the representation of your storage connection in the
> storage_server_connections table of the engine DB and make sure that the
> connection refers to the entry point you used in hosted-engine.conf on
> all your hosts; lastly, you have to set the value of mount_options here
> as well.

Weird. The configuration on all hosts is already referring to host01.

Also, in the storage_server_connections table:

engine=> SELECT * FROM storage_server_connections;
                  id                  |                connection                | user_name | password | iqn | port | portal | storage_type | mount_options | vfs_type  | nfs_version | nfs_timeo | nfs_retrans
--------------------------------------+------------------------------------------+-----------+----------+-----+------+--------+--------------+---------------+-----------+-------------+-----------+-------------
 bd78d299-c8ff-4251-8aab-432ce6443ae8 | host01.ovirt.forest.go.th:/hosted_engine |           |          |     |      |      1 |            7 |               | glusterfs |             |           |
(1 row)

--
Wee
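Simone's advice to make the engine DB row match hosted-engine.conf could look like the following. This is a hedged sketch, not a verified procedure: the DB name `engine`, the row UUID, and the mount options are taken from the output above; back up the database and stop ovirt-engine before editing it by hand.

```shell
# Hypothetical sketch: align the stored connection with the host01 entry point
# and carry the same mount options as hosted-engine.conf.
# Run on the engine machine, with ovirt-engine stopped and the DB backed up.
sudo -u postgres psql engine <<'SQL'
UPDATE storage_server_connections
   SET connection    = 'host01.ovirt.forest.go.th:/hosted_engine',
       mount_options = 'backupvolfile-server=host02.ovirt.forest.go.th:host03.ovirt.forest.go.th,fetch-attempts=2,log-level=WARNING,log-file=/var/log/engine_domain.log'
 WHERE id = 'bd78d299-c8ff-4251-8aab-432ce6443ae8';
SQL
```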
Re: [ovirt-users] Safe to upgrade HE hosts from GUI?
On Fri, Jul 29, 2016 at 6:31 AM, Wee Sritippho wrote:
> On 28/7/2559 15:54, Simone Tiraboschi wrote:
>> On Thu, Jul 28, 2016 at 10:41 AM, Wee Sritippho wrote:
>>> On 21/7/2559 16:53, Simone Tiraboschi wrote:
>>>> On Thu, Jul 21, 2016 at 11:43 AM, Wee Sritippho wrote:
>>>>> Can I just follow
>>>>> http://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
>>>>> until step 3 and do everything else via GUI?
>>>>
>>>> Yes, absolutely.
>>>
>>> Hi, I upgraded a host (host02) via the GUI and now its score is 0.
>>> Restarted the services but the result is still the same. Kinda lost
>>> now. What should I do next?
>>
>> Can you please attach the ovirt-ha-agent logs?
>
> Yes, here are the logs:
> https://app.box.com/s/b4urjty8dsuj98n3ywygpk3oh5o7pbsh

Thanks Wee,
your issue is here:

MainThread::ERROR::2016-07-17 14:32:45,586::storage_server::143::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(_validate_pre_connected_path)
The hosted-engine storage domain is already mounted on
'/rhev/data-center/mnt/glusterSD/host02.ovirt.forest.go.th:_hosted__engine/639e689c-8493-479b-a6eb-cc92b6fc4cf4'
with a path that is not supported anymore: the right path should be
'/rhev/data-center/mnt/glusterSD/host01.ovirt.forest.go.th:_hosted__engine/639e689c-8493-479b-a6eb-cc92b6fc4cf4'.

Did you manually try to work around the single-entry-point issue of the
glusterFS volume by using host01.ovirt.forest.go.th:_hosted__engine on
one host and host02.ovirt.forest.go.th:_hosted__engine on another? This
can cause a lot of confusion, since the code cannot detect that the
storage domain is the same, and you can end up with it mounted twice in
different locations and a lot of issues.

The correct solution for that issue was this one:
https://bugzilla.redhat.com/show_bug.cgi?id=1298693#c20

Now, to have it fixed on your env you have to hack a bit.
First step, you have to edit /etc/ovirt-hosted-engine/hosted-engine.conf
on all your hosted-engine hosts to ensure that the storage field always
points to the same entry point (host01, for instance). Then on each host
you can add something like:
mnt_options=backupvolfile-server=host02.ovirt.forest.go.th:host03.ovirt.forest.go.th,fetch-attempts=2,log-level=WARNING,log-file=/var/log/engine_domain.log

Then check the representation of your storage connection in the
storage_server_connections table of the engine DB and make sure that the
connection refers to the entry point you used in hosted-engine.conf on
all your hosts; lastly, you have to set the value of mount_options there
as well.

Please tune also the value of network.ping-timeout for your glusterFS
volume to avoid this:
https://bugzilla.redhat.com/show_bug.cgi?id=1319657#c17

> --
> Wee
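For reference, the engine-DB change and the glusterFS tuning described above might look roughly like the commands below. This is a sketch only, meant to be run on the engine host and a gluster node respectively: the connection UUID and backup-volfile servers are the ones quoted in this thread, but the volume name `hosted_engine` and the ping-timeout value are assumptions; verify every value against your own environment (and back up the engine DB) before running anything.

```shell
# 1) On the engine host: give the engine's storage connection the same
#    backup-volfile servers as hosted-engine.conf (UUID from this thread).
su - postgres -c "psql engine -c \"
UPDATE storage_server_connections
   SET mount_options = 'backupvolfile-server=host02.ovirt.forest.go.th:host03.ovirt.forest.go.th,fetch-attempts=2'
 WHERE id = 'bd78d299-c8ff-4251-8aab-432ce6443ae8';\""

# 2) On a gluster node: lower the ping timeout so an unreachable brick
#    server is noticed faster (volume name 'hosted_engine' is assumed).
gluster volume set hosted_engine network.ping-timeout 10
```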
Re: [ovirt-users] Safe to upgrade HE hosts from GUI?
On 28/7/2559 15:54, Simone Tiraboschi wrote:
> On Thu, Jul 28, 2016 at 10:41 AM, Wee Sritippho wrote:
>> On 21/7/2559 16:53, Simone Tiraboschi wrote:
>>> On Thu, Jul 21, 2016 at 11:43 AM, Wee Sritippho wrote:
>>>> Can I just follow
>>>> http://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
>>>> until step 3 and do everything else via GUI?
>>>
>>> Yes, absolutely.
>>
>> Hi, I upgraded a host (host02) via the GUI and now its score is 0.
>> Restarted the services but the result is still the same. Kinda lost
>> now. What should I do next?
>
> Can you please attach the ovirt-ha-agent logs?

Yes, here are the logs:
https://app.box.com/s/b4urjty8dsuj98n3ywygpk3oh5o7pbsh

--
Wee
Re: [ovirt-users] Safe to upgrade HE hosts from GUI?
On Thu, Jul 28, 2016 at 10:41 AM, Wee Sritippho wrote:
> On 21/7/2559 16:53, Simone Tiraboschi wrote:
>> On Thu, Jul 21, 2016 at 11:43 AM, Wee Sritippho wrote:
>>> Can I just follow
>>> http://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
>>> until step 3 and do everything else via GUI?
>>
>> Yes, absolutely.
>
> Hi, I upgraded a host (host02) via the GUI and now its score is 0.
> Restarted the services but the result is still the same. Kinda lost
> now. What should I do next?

Can you please attach the ovirt-ha-agent logs?

> [root@host02 ~]# service vdsmd restart
> Redirecting to /bin/systemctl restart vdsmd.service
> [root@host02 ~]# systemctl restart ovirt-ha-broker && systemctl restart ovirt-ha-agent
> [root@host02 ~]# systemctl status ovirt-ha-broker
> ● ovirt-ha-broker.service - oVirt Hosted Engine High Availability Communications Broker
>    Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled; vendor preset: disabled)
>    Active: active (running) since Thu 2016-07-28 15:09:38 ICT; 20min ago
>  Main PID: 4614 (ovirt-ha-broker)
>    CGroup: /system.slice/ovirt-ha-broker.service
>            └─4614 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker --no-daemon
>
> Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established
> Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
> Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established
> Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
> Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established
> Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
> Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established
> Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
> Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established
> Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
> [root@host02 ~]# systemctl status ovirt-ha-agent
> ● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
>    Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
>    Active: active (running) since Thu 2016-07-28 15:28:34 ICT; 1min 19s ago
>  Main PID: 11488 (ovirt-ha-agent)
>    CGroup: /system.slice/ovirt-ha-agent.service
>            └─11488 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent --no-daemon
>
> Jul 28 15:29:52 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pend...instead.
> Jul 28 15:29:52 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending = getattr(dispatcher, 'pending', lambda: 0)
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pend...instead.
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending = getattr(dispatcher, 'pending', lambda: 0)
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pend...instead.
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending = getattr(dispatcher, 'pending', lambda: 0)
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pend...instead.
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending = getattr(dispatcher, 'pending', lambda: 0)
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Error: 'Attempt to call functi...rt agent
> Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: ERROR:ovirt_hosted_engine_ha.agent.agent.Agent:Error: 'Attempt to call function: teardownIma...rt agent
> Hint: Some lines were ellipsized, use -l to show in full.
> [root@host01 ~]# hosted-engine --vm-status
>
> --== Host
Re: [ovirt-users] Safe to upgrade HE hosts from GUI?
On 21/7/2559 16:53, Simone Tiraboschi wrote:
> On Thu, Jul 21, 2016 at 11:43 AM, Wee Sritippho wrote:
>> Can I just follow
>> http://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
>> until step 3 and do everything else via GUI?
>
> Yes, absolutely.

Hi, I upgraded a host (host02) via the GUI and now its score is 0.
Restarted the services but the result is still the same. Kinda lost now.
What should I do next?

[root@host02 ~]# service vdsmd restart
Redirecting to /bin/systemctl restart vdsmd.service
[root@host02 ~]# systemctl restart ovirt-ha-broker && systemctl restart ovirt-ha-agent
[root@host02 ~]# systemctl status ovirt-ha-broker
● ovirt-ha-broker.service - oVirt Hosted Engine High Availability Communications Broker
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-07-28 15:09:38 ICT; 20min ago
 Main PID: 4614 (ovirt-ha-broker)
   CGroup: /system.slice/ovirt-ha-broker.service
           └─4614 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker --no-daemon

Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established
Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established
Jul 28 15:29:35 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established
Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established
Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established
Jul 28 15:29:48 host02.ovirt.forest.go.th ovirt-ha-broker[4614]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
[root@host02 ~]# systemctl status ovirt-ha-agent
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-07-28 15:28:34 ICT; 1min 19s ago
 Main PID: 11488 (ovirt-ha-agent)
   CGroup: /system.slice/ovirt-ha-agent.service
           └─11488 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent --no-daemon

Jul 28 15:29:52 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pend...instead.
Jul 28 15:29:52 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending = getattr(dispatcher, 'pending', lambda: 0)
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pend...instead.
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending = getattr(dispatcher, 'pending', lambda: 0)
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pend...instead.
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending = getattr(dispatcher, 'pending', lambda: 0)
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pend...instead.
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: pending = getattr(dispatcher, 'pending', lambda: 0)
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Error: 'Attempt to call functi...rt agent
Jul 28 15:29:53 host02.ovirt.forest.go.th ovirt-ha-agent[11488]: ERROR:ovirt_hosted_engine_ha.agent.agent.Agent:Error: 'Attempt to call function: teardownIma...rt agent
Hint: Some lines were ellipsized, use -l to show in full.
[root@host01 ~]# hosted-engine --vm-status

--== Host 1 status ==--

Status up-to-date : True
Hostname          : host01.ovirt.forest.go.th
Host ID           : 1
Engine status     : {"health": "good", "vm": "up",
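The per-host score that `hosted-engine --vm-status` reports is the quickest way to see whether an upgraded host is eligible to run the engine VM again (a score of 0, as on host02, means the agent refuses to start it there). A small sketch of pulling the scores out of that output; the sample text below is fabricated to mimic the format shown above (field labels and widths may differ between versions), since the real command only works on a hosted-engine host:

```shell
#!/bin/sh
# Sketch: extract "hostname score" pairs from hosted-engine --vm-status
# style output. The sample below is illustrative, not captured output.
status='--== Host 1 status ==--

Status up-to-date : True
Hostname          : host01.ovirt.forest.go.th
Host ID           : 1
Score             : 3400

--== Host 2 status ==--

Status up-to-date : True
Hostname          : host02.ovirt.forest.go.th
Host ID           : 2
Score             : 0'

# Remember the last Hostname seen, print it next to each Score.
echo "$status" | awk -F' *: *' '/^Hostname/ {h=$2} /^Score/ {print h, $2}'
# prints:
# host01.ovirt.forest.go.th 3400
# host02.ovirt.forest.go.th 0
```

On a real deployment one would pipe `hosted-engine --vm-status` into the same awk filter instead of the sample variable.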
Re: [ovirt-users] Safe to upgrade HE hosts from GUI?
On Thu, Jul 21, 2016 at 11:43 AM, Wee Sritippho wrote:
> Hi,
>
> I used to follow
> http://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
> when upgrading Hosted Engine (HE) but always fail to make the engine VM
> migrate to the freshly upgraded host as described in step 7. Furthermore,
> the update-available icon never disappeared from the GUI.

Yes, you are right on that: it will happen only when upgrading from 3.5,
where the maximum score for a hosted-engine host was 2400 points, to 3.6,
where the maximum score is 3400. On 3.6.z upgrades all the hosts are
already at 3400 points and so the VM will not migrate for that reason.

> So I thought using the GUI might be better for an amateur like me.
>
> Can I just follow
> http://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
> until step 3 and do everything else via GUI?

Yes, absolutely.

> Thank you,
>
> --
> Wee