Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice

2017-06-11 Thread ovirt

I think just defining each SSD as a single Gluster brick may provide
the best performance for VMs, but my understanding of this is
theoretical, so I leave it to the Gluster people to provide further
insight.


Pardon my lack of knowledge (I'm an oVirt/Gluster newbie).
I assume the one-brick-per-SSD layout can be set up using gdeploy in 
oVirt?
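
For context, a minimal sketch of what the one-brick-per-SSD layout boils 
down to at the plain Gluster CLI level (the host names node1-3, one XFS 
filesystem per SSD mounted under /gluster/brickN, and the volume name are 
all just example values; gdeploy generates the equivalent from its config 
file):

# one replica-3 volume, one brick (i.e. one SSD) per host
gluster volume create vmstore replica 3 \
  node1:/gluster/brick1/vmstore \
  node2:/gluster/brick1/vmstore \
  node3:/gluster/brick1/vmstore
gluster volume start vmstore

Repeat per SSD, or list several brick directories per host in one create 
command to get a distributed-replicated volume.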


On 2017-06-11 01:20, Barak Korren wrote:

On 11 June 2017 at 11:08, Yaniv Kaul  wrote:



I will install the o/s for each node on a SATADOM.
Each node will have 6x SSD for gluster storage.
Should this be software RAID, hardware RAID or no RAID?


I'd reckon that you should prefer HW RAID over software RAID, and some
RAID over no RAID at all, but it really depends on your budget,
performance, and your availability requirements.



Not sure that is the best advice, given the use of Gluster+SSDs for
hosting individual VMs.

Typical software or hardware RAID systems are designed for use with
spinning disks, and may not yield any better performance on SSDs. RAID
is also not very good when I/O is highly scattered as it probably is
when running multiple different VMs.

So we are left with using RAID solely for availability. I think
Gluster may already provide that, so adding additional software or
hardware layers for RAID may just degrade performance without
providing any tangible benefits.

I think just defining each SSD as a single Gluster brick may provide
the best performance for VMs, but my understanding of this is
theoretical, so I leave it to the Gluster people to provide further
insight.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Re-adding oVirt host.

2017-06-11 Thread Marcin Jessa
Hi.

I deleted /etc/vdsm/vdsm.id, rebooted the box but still could not re-add the 
host. 
Still the same error message: "Error while executing action: Cannot add Host. 
The Host name is already in use, please choose a unique name and try again."

What I did to fix it:
# su postgres
bash-4.2$ psql -s engine
psql (9.2.18)
engine=# select vds_id from vds_static where host_name = 'my.ip.add.ress';

vds_id
--
 acadd185-5673-4eb9-8ad3-b5c0390caeb0
(1 row)

engine=# delete from vds_statistics where vds_id = 
'acadd185-5673-4eb9-8ad3-b5c0390caeb0';
engine=# delete from vds_dynamic where vds_id = 
'acadd185-5673-4eb9-8ad3-b5c0390caeb0';
engine=# delete from vds_static where vds_id = 
'acadd185-5673-4eb9-8ad3-b5c0390caeb0';


Finally I ran:
# uuidgen > /etc/vdsm/vdsm.id
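
If it helps anyone else, the same cleanup fits in one psql session; 
wrapping the deletes in a transaction lets you roll back if the select 
returns something unexpected (same tables as above, substitute your own 
vds_id):

# su - postgres -c 'psql engine'
engine=# begin;
engine=# select vds_id from vds_static where host_name = 'my.ip.add.ress';
engine=# delete from vds_statistics where vds_id = '<vds_id from above>';
engine=# delete from vds_dynamic where vds_id = '<vds_id from above>';
engine=# delete from vds_static where vds_id = '<vds_id from above>';
engine=# commit;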

Thanks for pointing me in the right direction, Arman.

Marcin.

> On 11 Jun 2017, at 16:15, Arman Khalatyan  wrote:
> 
> you should delete the vdsm id from /etc/vdsm or so... check out the forum, 
> somewhere I mentioned it.
> 
> On 11.06.2017 at 2:52 PM, "Marcin M. Jessa"  > wrote:
> Hi guys.
> 
> I have a two node setup. One server running on CentOS [1] and one server with 
> oVirt node [2].
> I added local storage to my oVirt host but I forgot the storage I chose was 
> already in use so it failed.
> I then got a host entry with local storage which was shown as not configured.
> I then removed that local storage host but then it completely disappeared 
> from my setup.
> Then I tried to add it again, but oVirt said the host is already defined. I 
> tried a different name with the same IP, but it also failed, saying that IP 
> is already defined. Is there a way to re-add that previously defined host?
> How can I bring it back?
> 
> [1]: oVirt Engine Version: 4.1.2.2-1.el7.centos
> [2]: oVirt Node 4.1.2
> 
> 
> Marcin Jessa.
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] chrony or ntp ?

2017-06-11 Thread Christopher Cox

On 06/11/2017 04:19 AM, Fabrice Bacchella wrote:



On 10 June 2017 at 22:21, Michal Skrivanek  wrote:


On 09 Jun 2017, at 15:48, Fabrice Bacchella  wrote:


People might be surprised. I'm currently trying to understand what chrony did to 
my ntpd setup; it looks like it killed it, and puppet is having a hard time 
reconfiguring it.

And as it's not an 'ovirt update' but a vdsm update, which seems to happen more 
frequently, some people might forget to read the release notes and be disappointed.


We do not configure anything; we just pull in the dependency. You're free to
disable the service as a common admin task, as long as you replace it
with another time synchronization solution.


Yes, that's what I've done, but beware of users complaining about a broken ntp 
service because their specially crafted ntpd configuration is now lying dead. I 
detected it because my puppet setup tried to uninstall chrony and failed. What 
about other users? Do the default chrony settings always work, for everyone?



Since you mentioned puppet, here's the puppet .pp and template .erb we use; hope 
it helps.  IMHO, ntpd has problems that chrony doesn't have:


chrony/manifests/init.pp:

# This class is really only for CentOS 7 or higher.
#
class chrony (
  $stratumweight  = 0,
  $driftfile  = '/var/lib/chrony/drift',
  $keyfile= '/etc/chrony.keys',
  $keyfile_commandkey = 1,
  $generatecommandkey = true,
  $logdir = '/var/log/chrony',
  $noclientlog= true,
  $logchange  = '0.5',
  $makestep_enable= true,
  $makestep_threshold = 10,
  $makestep_update= -1,
  $bindcmdaddress = '127.0.0.1',
  $servers= ['ntp1.example.com', 'ntp2.example.com'],
  $iburst_enable  = true,
  $rtcsync_enable = false,) {
  if $operatingsystem in ['CentOS', 'RedHat'] and ($::operatingsystemmajrelease + 0) >= 7 {

ensure_packages(['chrony'])
# Red Hat, CentOS don't readily have ability to change location of conf
#  file.
$conf_file = '/etc/chrony.conf'

service { 'chronyd':
  ensure  => 'running',
  enable  => true,
  require => Package['chrony'],
}

file { $conf_file:
  ensure  => present,
  group   => 'root',
  mode    => '644',
  owner   => 'root',
  content => template('chrony/chrony_conf.erb'),
  notify  => Service['chronyd'],
  require => Package['chrony'],
}
  } else {
notify { 'chrony only supported in CentOS/RHEL 7 or greater': }

exec { '/bin/false': }
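    # (/bin/false always exits non-zero, so the exec above intentionally
    #  fails the Puppet run on unsupported platforms)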
  }
}

chrony/templates/chrony_conf.erb

<% @servers.flatten.each do |server| -%>
server <%= server %><% if @iburst_enable == true -%> iburst<% end -%>

<% end -%>

<% if @stratumweight -%>
stratumweight <%= @stratumweight %>
<% end -%>
<% if @driftfile -%>
driftfile <%= @driftfile %>
<% end -%>
<% if @makestep_enable == true -%>
makestep <%= @makestep_threshold %> <%= @makestep_update %>
<% end -%>
<% if @rtcsync_enable == true -%>
rtcsync
<% end -%>
<% if @bindcmdaddress -%>
bindcmdaddress <%= @bindcmdaddress %>
<% end -%>
<% if @keyfile -%>
keyfile <%= @keyfile %>
<%   if @keyfile_commandkey -%>
commandkey <%= @keyfile_commandkey %>
<%   else -%>
commandkey 0
<%   end -%>
<%   if @generatecommandkey == true -%>
generatecommandkey
<%   end -%>
<% end -%>
<% if @noclientlog -%>
noclientlog
<% end -%>
<% if @logchange -%>
logchange <%= @logchange %>
<% end -%>
<% if @logdir -%>
logdir <%= @logdir %>
<% end -%>
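
For reference, with the class defaults above the template renders roughly 
this /etc/chrony.conf (ntp1/ntp2.example.com are the placeholder defaults 
from the class parameters):

server ntp1.example.com iburst

server ntp2.example.com iburst

stratumweight 0
driftfile /var/lib/chrony/drift
makestep 10 -1
bindcmdaddress 127.0.0.1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
noclientlog
logchange 0.5
logdir /var/log/chrony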



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice

2017-06-11 Thread Karli Sjöberg
On 11 June 2017 at 15:43, ov...@fateknollogee.com wrote:

>> I think just defining each SSD as a single Gluster brick may provide
>> the best performance for VMs, but my understanding of this is
>> theoretical, so I leave it to the Gluster people to provide further
>> insight.

Barak, very interesting, I had never thought of doing it this way but
your idea does make sense.

I assume Gluster is able to tolerate drive failures in the array?
I'm also interested in hearing what the Gluster folks think about your
approach?

Here's an interesting article I recently read about how to set it up,
using ZFS as RAID:
http://45drives.blogspot.se/2016/11/an-introduction-to-clustering-how-to.html?m=1

/K

On 2017-06-11 01:20, Barak Korren wrote:
> On 11 June 2017 at 11:08, Yaniv Kaul  wrote:
>
>>> I will install the o/s for each node on a SATADOM.
>>> Each node will have 6x SSD for gluster storage.
>>> Should this be software RAID, hardware RAID or no RAID?
>>
>> I'd reckon that you should prefer HW RAID over software RAID, and some
>> RAID over no RAID at all, but it really depends on your budget,
>> performance, and your availability requirements.
>
> Not sure that is the best advice, given the use of Gluster+SSDs for
> hosting individual VMs.
>
> Typical software or hardware RAID systems are designed for use with
> spinning disks, and may not yield any better performance on SSDs. RAID
> is also not very good when I/O is highly scattered as it probably is
> when running multiple different VMs.
>
> So we are left with using RAID solely for availability. I think
> Gluster may already provide that, so adding additional software or
> hardware layers for RAID may just degrade performance without
> providing any tangible benefits.
>
> I think just defining each SSD as a single Gluster brick may provide
> the best performance for VMs, but my understanding of this is
> theoretical, so I leave it to the Gluster people to provide further
> insight.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice

2017-06-11 Thread ovirt

Mahdi,

Can you share some more detail on your hardware?
How many total SSDs?
Have you had any drive failures?
How do you monitor for failed drives?
Was it a problem replacing failed drives?
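
(For reference, from the Gluster side a dead SSD simply shows up as an 
offline brick, so checks along these lines cover the monitoring part; 
'vmstore' is just an example volume name:)

gluster volume status vmstore        # the brick on the failed SSD shows Online: N
gluster volume heal vmstore info     # files pending heal once the brick is back

# swapping a new disk/mount point in for the dead brick is typically:
gluster volume replace-brick vmstore node2:/gluster/brick3/vmstore \
  node2:/gluster/brick3-new/vmstore commit force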

On 2017-06-11 02:21, Mahdi Adnan wrote:

Hi,

In our setup, we used each SSD as a standalone brick ("no RAID") and
created a distributed-replicated volume with sharding.

Also, we are NOT managing Gluster from ovirt.

--

Respectfully
MAHDI A. MAHDI

-

FROM: users-boun...@ovirt.org  on behalf of
Barak Korren 
SENT: Sunday, June 11, 2017 11:20:45 AM
TO: Yaniv Kaul
CC: ov...@fateknollogee.com; Ovirt Users
SUBJECT: Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster
storage best practice

On 11 June 2017 at 11:08, Yaniv Kaul  wrote:



I will install the o/s for each node on a SATADOM.
Each node will have 6x SSD for gluster storage.
Should this be software RAID, hardware RAID or no RAID?


I'd reckon that you should prefer HW RAID over software RAID, and some
RAID over no RAID at all, but it really depends on your budget,
performance, and your availability requirements.



Not sure that is the best advice, given the use of Gluster+SSDs for
hosting individual VMs.

Typical software or hardware RAID systems are designed for use with
spinning disks, and may not yield any better performance on SSDs. RAID
is also not very good when I/O is highly scattered as it probably is
when running multiple different VMs.

So we are left with using RAID solely for availability. I think
Gluster may already provide that, so adding additional software or
hardware layers for RAID may just degrade performance without
providing any tangible benefits.

I think just defining each SSD as a single Gluster brick may provide
the best performance for VMs, but my understanding of this is
theoretical, so I leave it to the Gluster people to provide further
insight.

--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice

2017-06-11 Thread ovirt

I think just defining each SSD as a single Gluster brick may provide
the best performance for VMs, but my understanding of this is
theoretical, so I leave it to the Gluster people to provide further 
insight.




Barak, very interesting, I had never thought of doing it this way but 
your idea does make sense.


I assume Gluster is able to tolerate drive failures in the array?
I'm also interested in hearing what the Gluster folks think about your 
approach?




On 2017-06-11 01:20, Barak Korren wrote:

On 11 June 2017 at 11:08, Yaniv Kaul  wrote:



I will install the o/s for each node on a SATADOM.
Each node will have 6x SSD for gluster storage.
Should this be software RAID, hardware RAID or no RAID?


I'd reckon that you should prefer HW RAID over software RAID, and some
RAID over no RAID at all, but it really depends on your budget,
performance, and your availability requirements.



Not sure that is the best advice, given the use of Gluster+SSDs for
hosting individual VMs.

Typical software or hardware RAID systems are designed for use with
spinning disks, and may not yield any better performance on SSDs. RAID
is also not very good when I/O is highly scattered as it probably is
when running multiple different VMs.

So we are left with using RAID solely for availability. I think
Gluster may already provide that, so adding additional software or
hardware layers for RAID may just degrade performance without
providing any tangible benefits.

I think just defining each SSD as a single Gluster brick may provide
the best performance for VMs, but my understanding of this is
theoretical, so I leave it to the Gluster people to provide further
insight.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice

2017-06-11 Thread ovirt



Is that a hyper-converged setup of both oVirt and Gluster?
We usually do it in batches of 3 nodes.


Yes, it is for a HC setup of both oVirt & Gluster.


..it really depends on your budget,
performance, and your availability requirements.


I would like to enhance the performance.


Makes sense (I could not see the rear bays - might have missed them).
Will you be able to put some SSDs there for caching?

This is the correct part #: 
https://www.supermicro.com/products/chassis/2U/216/SC216BE26-R920LPB

In all the oVirt docs/videos, no one mentions using SSDs for caching,
so I was not planning on caching.
I planned to install oVirt Node on the rear SSDs.



On 2017-06-11 01:08, Yaniv Kaul wrote:

On Sat, Jun 10, 2017 at 1:43 PM,  wrote:


Martin,

Looking to test oVirt on real hardware (aka no nesting)

Scenario # 1:
1x Supermicro 2027TR-HTRF 2U 4 node server


Is that a hyper-converged setup of both oVirt and Gluster?
We usually do it in batches of 3 nodes.


I will install the o/s for each node on a SATADOM.
Each node will have 6x SSD for gluster storage.
Should this be software RAID, hardware RAID or no RAID?


I'd reckon that you should prefer HW RAID over software RAID, and some
RAID over no RAID at all, but it really depends on your budget,
performance, and your availability requirements.


Scenario # 2:
3x SuperMicro SC216E16-R1200LPB 2U server
Each server has 24x 2.5" bays (front) + 2x 2.5" bays (rear)
I will install the o/s on the drives using the rear bays (maybe RAID
1?)


Makes sense (I could not see the rear bays - might have missed them).
Will you be able to put some SSDs there for caching?


For Gluster, we will use the 24 front bays.
Should this be software RAID, hardware RAID or no RAID?


Same answer as above.
Y.


Thanks
Femi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users [1]




Links:
--
[1] http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Re-adding oVirt host.

2017-06-11 Thread Marcin M. Jessa

Hi guys.

I have a two node setup. One server running on CentOS [1] and one server 
with oVirt node [2].
I added local storage to my oVirt host but I forgot the storage I chose 
was already in use so it failed.
I then got a host entry with local storage which was shown as not 
configured.
I then removed that local storage host but then it completely 
disappeared from my setup.
Then I tried to add it again, but oVirt said the host is already 
defined. I tried a different name with the same IP, but it also failed, 
saying that IP is already defined. Is there a way to re-add that 
previously defined host?

How can I bring it back?

[1]: oVirt Engine Version: 4.1.2.2-1.el7.centos
[2]: oVirt Node 4.1.2


Marcin Jessa.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] chrony or ntp ?

2017-06-11 Thread Fabrice Bacchella

> On 10 June 2017 at 22:21, Michal Skrivanek  wrote:
> 
>> On 09 Jun 2017, at 15:48, Fabrice Bacchella  
>> wrote:
>> 
>> 
>> People might be surprised. I'm currently trying to understand what chrony did 
>> to my ntpd setup; it looks like it killed it, and puppet is having a hard time 
>> reconfiguring it.
>> 
>> And as it's not an 'ovirt update' but a vdsm update, which seems to happen 
>> more frequently, some people might forget to read the release notes and be 
>> disappointed.
> 
> We do not configure anything; we just pull in the dependency. You're free to
> disable the service as a common admin task, as long as you replace it
> with another time synchronization solution.

Yes, that's what I've done, but beware of users complaining about a broken ntp 
service because their specially crafted ntpd configuration is now lying dead. I 
detected it because my puppet setup tried to uninstall chrony and failed. What 
about other users? Do the default chrony settings always work, for everyone?
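
For anyone checking their own hosts after the update, the quick way to see 
which time service actually ended up in charge is plain systemd/chrony 
tooling, e.g.:

systemctl status chronyd ntpd   # which of the two is active and enabled
chronyc sources -v              # if chronyd won: is it actually syncing?
timedatectl                     # overall "NTP synchronized" state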
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice

2017-06-11 Thread Barak Korren
On 11 June 2017 at 11:08, Yaniv Kaul  wrote:
>
>> I will install the o/s for each node on a SATADOM.
>> Each node will have 6x SSD for gluster storage.
>> Should this be software RAID, hardware RAID or no RAID?
>
> I'd reckon that you should prefer HW RAID over software RAID, and some RAID
> over no RAID at all, but it really depends on your budget, performance, and
> your availability requirements.
>

Not sure that is the best advice, given the use of Gluster+SSDs for
hosting individual VMs.

Typical software or hardware RAID systems are designed for use with
spinning disks, and may not yield any better performance on SSDs. RAID
is also not very good when I/O is highly scattered as it probably is
when running multiple different VMs.

So we are left with using RAID solely for availability. I think
Gluster may already provide that, so adding additional software or
hardware layers for RAID may just degrade performance without
providing any tangible benefits.

I think just defining each SSD as a single Gluster brick may provide
the best performance for VMs, but my understanding of this is
theoretical, so I leave it to the Gluster people to provide further
insight.
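
(If you do go with plain SSD bricks, the usual extra step for a VM image 
volume is applying Gluster's predefined 'virt' option group and, as Mahdi 
mentions elsewhere in the thread, sharding; a sketch with a hypothetical 
volume name:)

gluster volume set vmstore group virt          # predefined option set recommended for VM images
gluster volume set vmstore features.shard on   # shard large image files so heals work per-shard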

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] trouble when creating VM snapshots including memory

2017-06-11 Thread Yaniv Kaul
On Fri, Jun 9, 2017 at 3:39 PM, Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> hi,
>
> i'm having trouble creating VM snapshots that include memory in my oVirt
> 4.1 test environment. when i do this the VM gets paused and shortly
> (20-30s) afterwards i'm seeing messages in engine.log about both iSCSI
> storage domains (master storage domain and data storage where VM resides)
> experiencing high latency. this quickly worsens from the engines view: VM
> is unresponsive, Host is unresponsive, engine wants to fence the host
> (impossible because it's the only host in the test cluster). in the end
> there is an EngineException
>
> EngineException: org.ovirt.engine.core.vdsbroke
> r.vdsbroker.VDSNetworkException: VDSGenericException:
> VDSNetworkException: Message timeout which can be caused by communication
> issues (Failed with error VDS_NETWORK_ERROR and code 5022)
>
> the snapshot fails and is left in an inconsistent state. the situation has
> to be resolved manually with unlock_entity.sh and maybe lvm commands. this
> happened twice in exactly the same manner. VM snapshots without memory for
> this VM are not a problem.
>
> VM guest OS is CentOS7 installed from one of the ovirt-image-repository
> images. it has the oVirt guest agent running.
>
> what could be wrong?
>
> this is a test environment where lots of parameters aren't optimal but i
> never had problems like this before, nothing concerning network latency.
> iSCSI is on a FreeNAS box. CPU, RAM, ethernet (10GBit for storage) on all
> hosts involved (engine hosted externally, oVirt Node, storage) should be OK
> by far.
>

Are you sure iSCSI traffic is going over the 10gb interfaces?
If it doesn't, it might choke the mgmt interface.
Regardless, how is the performance of the storage? I don't expect snapshots to
require much, but saving the memory might demand some storage throughput.
Perhaps there's a bottleneck there?
Y.
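
(One quick way to check the NIC question from the host - the interface 
name below is just an example:)

iscsiadm -m session -P 3 | grep -E 'Current Portal|Iface Netdev'   # which NIC each iSCSI session uses
ethtool ens1f0 | grep Speed                                        # link speed of that NIC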


>
> it looks like some obvious configuration botch or performance bottleneck
> to me. can it be linked to the network roles (management and migration
> network are on a 1 GBit link)?
>
> i'm still new to this, not a lot of KVM experience, too. maybe someone
> recognizes the culprit...
>
> thx
> matthias
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hardware for Hyperconverged oVirt: Gluster storage best practice

2017-06-11 Thread Yaniv Kaul
On Sat, Jun 10, 2017 at 1:43 PM,  wrote:

> Martin,
>
> Looking to test oVirt on real hardware (aka no nesting)
>
> Scenario # 1:
> 1x Supermicro 2027TR-HTRF 2U 4 node server
>

Is that a hyper-converged setup of both oVirt and Gluster?
We usually do it in batches of 3 nodes.

> I will install the o/s for each node on a SATADOM.
> Each node will have 6x SSD for gluster storage.
> Should this be software RAID, hardware RAID or no RAID?
>

I'd reckon that you should prefer HW RAID over software RAID, and some RAID
over no RAID at all, but it really depends on your budget, performance, and
your availability requirements.


>
> Scenario # 2:
> 3x SuperMicro SC216E16-R1200LPB 2U server
> Each server has 24x 2.5" bays (front) + 2x 2.5" bays (rear)
> I will install the o/s on the drives using the rear bays (maybe RAID 1?)
>

Makes sense (I could not see the rear bays - might have missed them).
Will you be able to put some SSDs there for caching?


> For Gluster, we will use the 24 front bays.
> Should this be software RAID, hardware RAID or no RAID?
>

Same answer as above.
Y.


> Thanks
> Femi
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt client developpement

2017-06-11 Thread Yaniv Kaul
On Fri, Jun 9, 2017 at 10:30 AM, Fabrice Bacchella <
fabrice.bacche...@orange.fr> wrote:

>
> > On 9 June 2017 at 16:25, Luca 'remix_tj' Lorenzetto <
> lorenzetto.l...@gmail.com> wrote:
> >
> > On Fri, Jun 9, 2017 at 4:19 PM, Fabrice Bacchella
> >  wrote:
> >> For my ovirt cli, I would like to have unit tests. But there is nothing
> to test in standalone mode; I need a running ovirt with a database in a
> known state.
> >>
> >> Is there somewhere a docker image with a toy setup, or a mock ovirt
> engine that can be downloaded and used for that?
> >
> > Maybe you can run lago
> > http://lago.readthedocs.io/en/stable/README.html and setup an ovirt
> > env on the fly?
>
> That's not an answer to my question. I can always build one manually. I
> know how to build a VM/container from that, but I will still need to fill
> it with fake data and update it for every release of ovirt.
>
> With a prebuild system, provided by oVirt people, I could also run it on
> release candidate and help them find bugs.
>

ovirt-system-tests, on top of Lago, is what we use all the time for system
tests. It takes a few minutes to set up a system; you can save/restore a
running system, update it easily, etc. It's quite fully featured and easily
extendable.
Y.
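
If you want to give it a try, the rough flow (from memory - check the 
ovirt-system-tests README for the exact suite names and current 
invocation) is:

git clone https://gerrit.ovirt.org/ovirt-system-tests
cd ovirt-system-tests
./run_suite.sh basic-suite-master    # brings up engine + hosts + storage as Lago VMs and runs the tests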



> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can't access ovirt-engine webpage

2017-06-11 Thread Yedidyah Bar David
On Sat, Jun 10, 2017 at 6:43 PM, Thomas Wakefield  wrote:
> As a follow-up for future reference: re-running engine-setup with the same 
> config file solved all my issues.  I can now log into the website and run 
> VMs again.

"same config file" - the one passed to --config-append=?

Interesting. It might be worthwhile to compare the setup logs created by
these runs. Perhaps you can open a BZ bug, describe your flow, and attach
the conf and log files?

Thanks for the report!
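
(For reference, the flow described above is roughly the following; the 
answer-file name is whatever engine-setup saved for you under the answers 
directory:)

# re-run setup with the previously saved answers
engine-setup --config-append=/var/lib/ovirt-engine/setup/answers/<saved-answers>.conf

# the per-run setup logs to compare live here:
ls -lt /var/log/ovirt-engine/setup/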

>
>
>> On Jun 5, 2017, at 5:12 AM, Thomas Wakefield  wrote:
>>
>>
>>> On Jun 5, 2017, at 2:24 AM, Yedidyah Bar David  wrote:
>>>
>>> On Mon, Jun 5, 2017 at 4:44 AM, Thomas Wakefield  wrote:
 After a reboot I can’t access the management ovirt-engine webpage anymore.


 server.log line that looks bad:

 2017-06-04 21:35:55,652-04 ERROR 
 [org.jboss.as.controller.management-operation] (DeploymentScanner-threads 
 - 2) WFLYCTL0190: Step handler 
 org.jboss.as.server.deployment.DeploymentHandlerUtil$5@61b686ed for 
 operation {"operation" => "undeploy","address" => [("deployment" => 
 "engine.ear")],"owner" => [("subsystem" => 
 "deployment-scanner"),("scanner" => "default")]} at address [("deployment" 
 => "engine.ear")] failed handling operation rollback -- 
 java.lang.IllegalStateException: WFLYCTL0345: Timeout after 5 seconds 
 waiting for existing service service 
 jboss.deployment.unit."engine.ear".contents to be removed so a new 
 instance can be installed.: java.lang.IllegalStateException: WFLYCTL0345: 
 Timeout after 5 seconds waiting for existing service service 
 jboss.deployment.unit."engine.ear".contents to be removed so a new 
 instance can be installed.
   at 
 org.jboss.as.controller.OperationContextImpl$ContextServiceBuilder.install(OperationContextImpl.java:2107)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.server.deployment.PathContentServitor.addService(PathContentServitor.java:50)
  [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.server.deployment.DeploymentHandlerUtil.doDeploy(DeploymentHandlerUtil.java:165)
  [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.server.deployment.DeploymentHandlerUtil$5$1.handleResult(DeploymentHandlerUtil.java:333)
  [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.controller.AbstractOperationContext$Step.invokeResultHandler(AbstractOperationContext.java:1384)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.controller.AbstractOperationContext$Step.handleResult(AbstractOperationContext.java:1366)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.controller.AbstractOperationContext$Step.finalizeInternal(AbstractOperationContext.java:1328)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.controller.AbstractOperationContext$Step.finalizeStep(AbstractOperationContext.java:1311)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.controller.AbstractOperationContext$Step.access$300(AbstractOperationContext.java:1185)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.controller.AbstractOperationContext.executeResultHandlerPhase(AbstractOperationContext.java:767)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.controller.AbstractOperationContext.processStages(AbstractOperationContext.java:644)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:370)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.controller.OperationContextImpl.executeOperation(OperationContextImpl.java:1329)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.controller.ModelControllerImpl.internalExecute(ModelControllerImpl.java:400)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.controller.ModelControllerImpl.execute(ModelControllerImpl.java:222)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.controller.ModelControllerImpl$3$1$1.run(ModelControllerImpl.java:756)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at 
 org.jboss.as.controller.ModelControllerImpl$3$1$1.run(ModelControllerImpl.java:750)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
   at java.security.AccessController.doPrivileged(Native Method) 
 [rt.jar:1.8.0_131]
   at 
 org.jboss.as.controller.ModelControllerImpl$3$1.run(ModelControllerImpl.java:750)
  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]