Re: [ovirt-users] Building oVirt engine on Debian

2017-06-05 Thread Leni Kadali Mutungi
Setup was successful. Attached is the message I received. I didn't
mind the firewalld bits, since I don't have that installed. However,
none of the ovn-* commands worked. I tried locating their equivalents,
thinking that they could be in the ovirt_engine folder or something
along those lines. The `sed` and `keytool` commands worked. However, I
don't have Open vSwitch installed, so I'll add it if necessary.

-- 
- Warm regards
Leni Kadali Mutungi


ovirt_config
Description: Binary data
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted-Engine Deploy - Error Creating a Storage Domain

2017-06-05 Thread Langley, Robert
FYI: My miss. Firewall port for VDSM needed to be added to my zone(s).
Yay! The host is now in GREEN status within the Default Cluster.

Sent using OWA for iPhone


Re: [ovirt-users] Docker images for oVirt engine

2017-06-05 Thread Jason Brooks
>
> Once the images are built, you can deploy the complete oVirt application
> running this:
>
>   make deploy

This is awesome. I've been playing with this on a CentOS host running
Origin 1.5.1.

I got stopped when I tried to approve the host in the engine; that
fails with an error message like: "Host vdsc-ds-615nl
installation failed. Failed to get the session.." Any idea what's
going wrong there?

Jason


>
> Would be great to have you testing this, giving feedback and reporting
> any issues you find. But be aware that this is in a very early stage of
> development and should be considered experimental.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted-Engine Deploy - Error Creating a Storage Domain

2017-06-05 Thread Langley, Robert
I have good news. Given the IOProcess connection errors, I was suspecting the
problem might be with the system (BIOS related).
The Dell PE R730 was at BIOS version 2.2.5, and an update to version 2.4.3 was
released in April (with another version in between these two). There were a
couple of fixes between the version the server had and the latest, which had
me wondering if they might be related to the IOProcess issue I was
experiencing.
After applying this update, the Hosted Engine deployment went further.
It could explain why I could not add this host in a previous installation.

I'll start another thread for the next bump I'm running into, unless I can
figure it out. It has to do with the VDSM host: the engine says it cannot
communicate with the host it's running on, so the setup timed out waiting for
the VDSM host to start.

From: Sandro Bonazzola [mailto:sbona...@redhat.com]
Sent: Monday, June 5, 2017 9:00 AM
To: Langley, Robert ; Sahina Bose 

Cc: Simone Tiraboschi ; Nir Soffer ; 
Allon Mureinik ; Tal Nisan ; users 

Subject: Re: [ovirt-users] Hosted-Engine Deploy - Error Creating a Storage 
Domain



On Fri, Jun 2, 2017 at 6:16 PM, Langley, Robert 
> wrote:
Any progress?

Nir? Allon? Tal?


One thing that has been going through my mind is whether oVirt allows a 
GlusterFS storage domain to work with multiple CPU types?

Sahina?

The two dedicated GlusterFS storage servers are AMD Opteron. And the third 
server for the replica 3, which I am hoping I can also use as an oVirt host is 
a new Intel Xeon (Dell PE R830). I know GlusterFS allows for mixed hardware, 
but I’m not sure about when oVirt manages GlusterFS, if mixed hardware and 
mixed use has been accounted for?

From: Simone Tiraboschi [mailto:stira...@redhat.com]
Sent: Wednesday, May 31, 2017 8:41 AM
To: Langley, Robert 
>
Cc: Sandro Bonazzola >; Nir 
Soffer >; Allon Mureinik 
>; Tal Nisan 
>; users 
>

Subject: Re: [ovirt-users] Hosted-Engine Deploy - Error Creating a Storage 
Domain

It seems to be something related to the IOProcess connection.

2017-05-25 20:54:40,362-0700 INFO  (jsonrpc/4) [IOProcessClient] Starting 
client ioprocess-3 (__init__:330)
2017-05-25 20:54:40,370-0700 INFO  (ioprocess/31239) [IOProcess] Starting 
ioprocess (__init__:452)
2017-05-25 20:54:40,407-0700 ERROR (jsonrpc/4) [storage.TaskManager.Task] 
(Task='89dd17d2-8a38-4825-9ba2-f231f1aff9f5') Unexpected error (task:870)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 877, in _run
return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2581, in createStorageDomain
storageType, domVersion)
  File "/usr/share/vdsm/storage/nfsSD.py", line 87, in create
remotePath, storageType, version)
  File "/usr/share/vdsm/storage/fileSD.py", line 421, in _prepareMetadata
procPool.fileUtils.createdir(metadataDir, 0o775)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py", line 
166, in createdir
self._iop.mkdir(tmpPath, mode)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 524, in 
mkdir
self.timeout)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 466, in 
_sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 61] No data available
2017-05-25 20:54:40,409-0700 INFO  (jsonrpc/4) [storage.TaskManager.Task] 
(Task='89dd17d2-8a38-4825-9ba2-f231f1aff9f5') aborting: Task is aborted: 
u'[Errno 61] No data available' - code 100 (task:1175)
2017-05-25 20:54:40,409-0700 ERROR (jsonrpc/4) [storage.Dispatcher] [Errno 61] 
No data available (dispatcher:80)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/dispatcher.py", line 72, in wrapper
result = ctask.prepare(func, *args, **kwargs)
  File "/usr/share/vdsm/storage/task.py", line 105, in wrapper
return m(self, *a, **kw)
  File "/usr/share/vdsm/storage/task.py", line 1183, in prepare
raise self.error
OSError: [Errno 61] No data available
2017-05-25 20:54:40,410-0700 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call 
StorageDomain.create failed (error 351) in 0.24 seconds (__init__:533)
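For reference, the `[Errno 61] No data available` in the traceback is the raw OS error that ioprocess re-raised via `OSError(errcode, errstr)`. On Linux, errno 61 is ENODATA; note that errno numbers are platform-specific, so this mapping is an assumption that holds for Linux but not for every system. A small sketch reconstructing the message from the errno constant:

```python
import errno
import os

# Rebuild the "[Errno N] <message>" string that ioprocess raised via
# OSError(errcode, errstr).  On Linux, errno 61 is ENODATA
# ("No data available"); other platforms assign 61 differently.
def describe(code):
    return "[Errno %d] %s" % (code, os.strerror(code))

print(describe(errno.ENODATA))
```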

Nir, any hint?


On Wed, May 31, 2017 at 5:24 PM, Langley, Robert 
> wrote:
SOSReport attached, with md5. Thank you


From: Sandro Bonazzola [mailto:sbona...@redhat.com]
Sent: Wednesday, May 31, 2017 12:00 

Re: [ovirt-users] unsuccessful hosted engine install

2017-06-05 Thread Brendan Hartzell
As requested,

The output of ovirt-hosted-engine-cleanup

[root@node-1 ~]# ovirt-hosted-engine-cleanup
This will de-configure the host to run ovirt-hosted-engine-setup from
scratch.
Caution, this operation should be used with care.

Are you sure you want to proceed? [y/n]
y
 -=== Destroy hosted-engine VM ===-
You must run deploy first
 -=== Stop HA services ===-
 -=== Shutdown sanlock ===-
shutdown force 1 wait 0
shutdown done 0
 -=== Disconnecting the hosted-engine storage domain ===-
You must run deploy first
 -=== De-configure VDSM networks ===-
 -=== Stop other services ===-
 -=== De-configure external daemons ===-
 -=== Removing configuration files ===-
? /etc/init/libvirtd.conf already missing
- removing /etc/libvirt/nwfilter/vdsm-no-mac-spoofing.xml
? /etc/ovirt-hosted-engine/answers.conf already missing
? /etc/ovirt-hosted-engine/hosted-engine.conf already missing
- removing /etc/vdsm/vdsm.conf
- removing /etc/pki/vdsm/certs/cacert.pem
- removing /etc/pki/vdsm/certs/vdsmcert.pem
- removing /etc/pki/vdsm/keys/vdsmkey.pem
- removing /etc/pki/vdsm/libvirt-spice/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/ca-key.pem
- removing /etc/pki/vdsm/libvirt-spice/server-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-key.pem
? /etc/pki/CA/cacert.pem already missing
? /etc/pki/libvirt/*.pem already missing
? /etc/pki/libvirt/private/*.pem already missing
? /etc/pki/ovirt-vmconsole/*.pem already missing
- removing /var/cache/libvirt/qemu
? /var/run/ovirt-hosted-engine-ha/* already missing
[root@node-1 ~]#

Output of sanlock client status:
[root@node-1 ~]# sanlock client status
[root@node-1 ~]#

Thank you for your help!

On Mon, Jun 5, 2017 at 7:25 AM, Simone Tiraboschi 
wrote:

>
>
> On Mon, Jun 5, 2017 at 3:57 PM, Brendan Hartzell  wrote:
>
>> After letting this sit for a few days, does anyone have any ideas as to
>> how to deal with my situation?  Would anyone like me to send the SOS report
>> directly to them?  It's a 9MB file.
>>
>> If nothing comes up, I'm going to try and sift through the SOS report
>> tonight, but I won't know what I'm trying to find.
>>
>> Thank you for any and all help.
>>
>> On Thu, Jun 1, 2017 at 1:15 AM, Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On Thu, Jun 1, 2017 at 6:36 AM, Brendan Hartzell 
>>> wrote:
>>>
 Ran the 4 commands listed above, no errors on the screen.

 Started the hosted-engine standard setup from the web-UI.

 Using iSCSI for the storage.

 Using mostly default options, I got these errors in the web-UI.

  Error creating Volume Group: Failed to initialize physical device:
 ("[u'/dev/mapper/36589cfc00de7482638fcfce4']",)
 Failed to execute stage 'Misc configuration': Failed to initialize
 physical device: ("[u'/dev/mapper/36589cfc0
 0de7482638fcfce4']",)
 Hosted Engine deployment failed: this system is not reliable, please
 check the issue,fix and redeploy

 I rebuilt my iSCSI (I don't think I cleaned it up from a previous
 install).
 Re-ran the above 4 commands.
 Restarted hosted engine standard setup from web-UI.
 Install moved past "Connecting Storage Pool" so I believe the above was
 my fault.

 These are the last messages displayed on the web-UI.
  Creating Storage Pool
 Connecting Storage Pool
 Verifying sanlock lockspace initialization
 Creating Image for 'hosted-engine.lockspace' ...
 Image for 'hosted-engine.lockspace' created successfully
 Creating Image for 'hosted-engine.metadata' ...
 Image for 'hosted-engine.metadata' created successfully
 Creating VM Image
 Extracting disk image from OVF archive (could take a few minutes
 depending on archive size)
 Validating pre-allocated volume size
 Uploading volume to data domain (could take a few minutes depending on
 archive size)

 At the host terminal, I got the error "watchdog watchdog0: watchdog did
 not stop!"
 Then the host restarted.

>>>
>>> Simone, can you help here?
>>>
>>>
> Ok, sorry for the delay.
> The second installation attempt seems fine, but it seems that
> ovirt-hosted-engine-cleanup failed to stop sanlock, so the watchdog
> kicked in and rebooted your system in the middle of the deployment attempt.
>
> could you please post the output of
>ovirt-hosted-engine-cleanup
>sanlock client status
> ?
>
>
>
>>
>>>
>>>

 This is as far as I've gotten in previous attempts.

 Attaching the hosted-engine-setup log.

 The SOS report is 9MB and the ovirt users group will drop the email.

 On Wed, May 31, 2017 at 6:59 AM, Sandro Bonazzola 
 wrote:

>
>
> On Wed, May 31, 2017 at 3:10 PM, Brendan Hartzell 
> wrote:
>
>> Now that you have identified the problem, should I run the following
>> commands and send you another SOS?
>>

Re: [ovirt-users] Seamless SAN HA failovers with oVirt?

2017-06-05 Thread Dan Yasny
As soon as your NAS goes down, qemu running the VMs will start getting EIO
errors and the VMs will pause, so as not to lose any data. If the NAS upgrade
isn't a very long procedure, you might as well complete the updates, bring the
NAS back up, and unpause the VMs.

On Mon, Jun 5, 2017 at 5:47 PM, Matthew Trent <
matthew.tr...@lewiscountywa.gov> wrote:

> I'm using two TrueNAS HA SANs (FreeBSD-based ZFS) to provide storage via
> NFS to 7 oVirt boxes and about 25 VMs.
>
> For SAN system upgrades I've always scheduled a maintenance window, shut
> down all the oVirt stuff, upgraded the SANs, and spun everything back up.
> It's pretty disruptive, but I assumed that was the thing to do.
>
> However, in talking with the TrueNAS vendor they said the majority of
> their customers are using VMWare and they almost always do TrueNAS updates
> in production. They just upgrade one head of the TrueNAS HA pair then
> failover to the other head and upgrade it too. There's a 30-ish second
> pause in I/O while the disk arrays are taken over by the other HA head, but
> VMWare just tolerates it and continues without skipping a beat. They say
> this is standard procedure in the SAN world and virtualization systems
> should tolerate 30-60 seconds of I/O pause for HA failovers seamlessly.
>
> It sounds great to me, but I wanted to pick this list's brain -- is anyone
> doing this with oVirt? Are you able to failover your HA SAN with 30-60
> seconds of no I/O without oVirt freaking out?
>
> If not, are there any tunables relating to this? I see the default NFS
> mount options look fairly tolerant (proto=tcp,timeo=600,retrans=6), but
> are there VDSM or sanlock or some other oVirt timeouts that will kick in
> and start putting storage domains into error states, fencing hosts or
> something before that? I've never timed anything, but I want to say my past
> experience is that ovirt hosted engine started showing errors almost
> immediately when we've had SAN issues in the past.
>
> Thanks!
>
> --
> Matthew Trent
> Network Engineer
> Lewis County IT Services
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


[ovirt-users] Seamless SAN HA failovers with oVirt?

2017-06-05 Thread Matthew Trent
I'm using two TrueNAS HA SANs (FreeBSD-based ZFS) to provide storage via NFS to 
7 oVirt boxes and about 25 VMs.

For SAN system upgrades I've always scheduled a maintenance window, shut down 
all the oVirt stuff, upgraded the SANs, and spun everything back up. It's 
pretty disruptive, but I assumed that was the thing to do.

However, in talking with the TrueNAS vendor they said the majority of their 
customers are using VMWare and they almost always do TrueNAS updates in 
production. They just upgrade one head of the TrueNAS HA pair then failover to 
the other head and upgrade it too. There's a 30-ish second pause in I/O while 
the disk arrays are taken over by the other HA head, but VMWare just tolerates 
it and continues without skipping a beat. They say this is standard procedure 
in the SAN world and virtualization systems should tolerate 30-60 seconds of 
I/O pause for HA failovers seamlessly.

It sounds great to me, but I wanted to pick this list's brain -- is anyone 
doing this with oVirt? Are you able to failover your HA SAN with 30-60 seconds 
of no I/O without oVirt freaking out?

If not, are there any tunables relating to this? I see the default NFS mount 
options look fairly tolerant (proto=tcp,timeo=600,retrans=6), but are there 
VDSM or sanlock or some other oVirt timeouts that will kick in and start 
putting storage domains into error states, fencing hosts or something before 
that? I've never timed anything, but I want to say my past experience is that 
ovirt hosted engine started showing errors almost immediately when we've had 
SAN issues in the past.
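As a rough illustration of those mount options: timeo is expressed in deciseconds, so timeo=600 is a 60-second timeout per attempt, and retrans=6 allows six retransmissions after the initial try. A minimal sketch of the resulting worst-case retry window, under the simplifying assumption of a fixed timeout per attempt (the real kernel client's behaviour varies by version and transport, so treat this as an estimate, not the client's actual algorithm):

```python
# Rough worst-case window before the NFS client reports "server not
# responding", assuming a fixed timeout per attempt.  timeo is given in
# deciseconds in the mount options (proto=tcp,timeo=600,retrans=6).
def nfs_retry_window_seconds(timeo, retrans):
    per_attempt = timeo / 10.0   # deciseconds -> seconds
    attempts = retrans + 1       # the initial attempt plus each retransmission
    return per_attempt * attempts

print(nfs_retry_window_seconds(600, 6))  # 420.0
```

So even under this simple model the NFS client itself tolerates far more than a 30-60 second failover pause; whether sanlock or VDSM time out first is a separate question.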

Thanks!

--
Matthew Trent
Network Engineer
Lewis County IT Services


Re: [ovirt-users] Nested KVM for oVirt 4.1.2

2017-06-05 Thread ovirt

I want to test oVirt with real hardware, no more nested VMs.
3 hosts; each will run Fedora (maybe CentOS, but I prefer Fedora).

Since I don't want a nested setup, I assume the engine will require a
4th VM.

What is the install process?

On 2017-06-05 16:13, ov...@fateknollogee.com wrote:

I want to test oVirt with real hardware, no more nested VMs.
3 hosts, each vm will be Fedora (maybe CentOS, I prefer Fedora)
What is the install process?

On 2017-05-30 02:42, Sandro Bonazzola wrote:

On Tue, May 30, 2017 at 5:18 AM,  wrote:


Sandro,
If & when one decides to "graduate" & use real hardware, what is the
install process?


The install process depends on how you want to design your lab.
You can have a small deployment with just 3 hosts in a hyperconverged
setup, or a large datacenter with 200 hypervisors and one or more
dedicated SANs for the storage.
If you go with a hyperconverged setup, you can install oVirt Node on
3 hosts and then, on one of them, use Cockpit to deploy Gluster and
hosted engine on top of it in hyperconverged mode.

You can find an installation guide here:
http://www.ovirt.org/documentation/install-guide/Installation_Guide/


Is the gluster part still automated or that has to be done manually?


If you go with hyperconverged mode, it's now automated. You can find
more info here:
http://www.ovirt.org/develop/release-management/features/gluster/gdeploy-cockpit-integration/
Sahina, please ensure the above link is updated; it still shows the
feature as WIP even though it has been released.


Another question, what type of use cases & jobs is oVirt being
deployed in & how are people getting tech support?


About use cases for oVirt you can find some examples here:
http://www.ovirt.org/community/user-stories/users-and-providers/
If you want dedicated support, I would recommend getting a Red Hat
Virtualization subscription (RHV is oVirt with technical support and
some additions), which includes Red Hat support.
Another place to get support if you stay with oVirt is the community:
this mailing list, the IRC channel and social media, have a look here
for other contacts: http://www.ovirt.org/community/


On 2017-05-29 04:33, Sandro Bonazzola wrote:
On Mon, May 29, 2017 at 10:21 AM,  wrote:

I assume people are using oVirt in production?

Sure, I was just wondering why you were running in nested
virtualization :-)
Since your use case is a "playground" environment, I can suggest you
have a look at Lago http://lago.readthedocs.io/en/stable/ [1]
and at the Lago demo at https://github.com/lago-project/lago-demo [2]
to help you prepare an isolated test environment for your
learning.

On 2017-05-29 04:13, Sandro Bonazzola wrote:
On Mon, May 29, 2017 at 12:12 AM,  wrote:



http://community.redhat.com/blog/2013/08/testing-ovirt-3-3-with-nested-kvm/

[3]
[1]
[1]

I have one CentOS7 host (physical) & 3x oVirt nodes 4.1.2 (these are
vm's).

Hi, can you please share the use case for this setup?

I have installed vdsm-hook-nestedvm on the host.

Should I install vdsm-hook-macspoof on the 3x node vm's?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users [4] [2] [2]

--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA [3]

[4]

TRIED. TESTED. TRUSTED. [5]

Links:
--
[1]



http://community.redhat.com/blog/2013/08/testing-ovirt-3-3-with-nested-kvm/

[3]
[1]
[2] http://lists.ovirt.org/mailman/listinfo/users [4] [2]
[3] https://www.redhat.com/
[4] https://red.ht/sig
[5] https://redhat.com/trusted

--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA [3]

[4]

TRIED. TESTED. TRUSTED. [5]

Links:
--
[1]


http://community.redhat.com/blog/2013/08/testing-ovirt-3-3-with-nested-kvm/

[3]
[2] http://lists.ovirt.org/mailman/listinfo/users [4]
[3] https://www.redhat.com/
[4] https://red.ht/sig
[5] https://redhat.com/trusted


--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA [5]

 [6]

TRIED. TESTED. TRUSTED. [7]



Links:
--
[1] http://lago.readthedocs.io/en/stable/
[2] https://github.com/lago-project/lago-demo
[3] 
http://community.redhat.com/blog/2013/08/testing-ovirt-3-3-with-nested-kvm/

[4] http://lists.ovirt.org/mailman/listinfo/users
[5] https://www.redhat.com/
[6] https://red.ht/sig
[7] https://redhat.com/trusted





Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-05 Thread Abi Askushi
Also when testing with dd i get the following:

*Testing on the gluster mount: *
dd if=/dev/zero
of=/rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/test2.img
oflag=direct bs=512 count=1
dd: error writing
‘/rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/test2.img’:
*Transport endpoint is not connected*
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00336755 s, 0.0 kB/s

*Testing on the /root directory (XFS): *
dd if=/dev/zero of=/test2.img oflag=direct bs=512 count=1
dd: error writing ‘/test2.img’: *Invalid argument*
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000321239 s, 0.0 kB/s

It seems that gluster is trying to do the same thing and failing.



On Mon, Jun 5, 2017 at 10:10 PM, Abi Askushi 
wrote:

> The question that arises is what is needed to make gluster aware of the 4K
> physical sectors presented to it (the logical sector is also 4K). The
> offset (127488) at the log does not seem aligned at 4K.
>
> Alex
>
> On Mon, Jun 5, 2017 at 2:47 PM, Abi Askushi 
> wrote:
>
>> Hi Krutika,
>>
>> I am saying that I am facing this issue with 4k drives. I never
>> encountered this issue with 512 drives.
>>
>> Alex
>>
>> On Jun 5, 2017 14:26, "Krutika Dhananjay"  wrote:
>>
>>> This seems like a case of O_DIRECT reads and writes gone wrong, judging
>>> by the 'Invalid argument' errors.
>>>
>>> The two operations that have failed on gluster bricks are:
>>>
>>> [2017-06-05 09:40:39.428979] E [MSGID: 113072]
>>> [posix.c:3453:posix_writev] 0-engine-posix: write failed: offset 0,
>>> [Invalid argument]
>>> [2017-06-05 09:41:00.865760] E [MSGID: 113040]
>>> [posix.c:3178:posix_readv] 0-engine-posix: read failed on
>>> gfid=8c94f658-ac3c-4e3a-b368-8c038513a914, fd=0x7f408584c06c,
>>> offset=127488 size=512, buf=0x7f4083c0b000 [Invalid argument]
>>>
>>> But then, both the write and the read have 512byte-aligned offset, size
>>> and buf address (which is correct).
>>>
>>> Are you saying you don't see this issue with 4K block-size?
>>>
>>> -Krutika
>>>
>>> On Mon, Jun 5, 2017 at 3:21 PM, Abi Askushi 
>>> wrote:
>>>
 Hi Sahina,

 Attached are the logs. Let me know if sth else is needed.

 I have 5 disks (with 4K physical sector) in RAID5. The RAID has 64K
 stripe size at the moment.
 I have prepared the storage as below:

 pvcreate --dataalignment 256K /dev/sda4
 vgcreate --physicalextentsize 256K gluster /dev/sda4

 lvcreate -n engine --size 120G gluster
 mkfs.xfs -f -i size=512 /dev/gluster/engine

 Thanx,
 Alex
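The 256K figure in the pvcreate/vgcreate commands above matches the RAID geometry: with 5 disks in RAID5, one disk's worth of each stripe holds parity, so a full stripe is 4 data chunks of 64 KiB. A quick sanity check of that arithmetic (the disk count and chunk size are taken from the message above; this only verifies the alignment math, not the actual array layout):

```python
# Full-stripe width for the RAID5 set described above:
# 5 disks, 64 KiB chunk size, one chunk of parity per stripe.
disks = 5
chunk_kib = 64
data_disks = disks - 1          # RAID5 dedicates one disk's worth to parity

full_stripe_kib = data_disks * chunk_kib
print(full_stripe_kib)  # 256 -> matches --dataalignment 256K
```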

 On Mon, Jun 5, 2017 at 12:14 PM, Sahina Bose  wrote:

> Can we have the gluster mount logs and brick logs to check if it's the
> same issue?
>
> On Sun, Jun 4, 2017 at 11:21 PM, Abi Askushi 
> wrote:
>
>> I clean installed everything and ran into the same.
>> I then ran gdeploy and encountered the same issue when deploying
>> engine.
>> Seems that gluster (?) doesn't like 4K sector drives. I am not sure
>> if it has to do with alignment. The weird thing is that gluster volumes 
>> are
>> all ok, replicating normally and no split brain is reported.
>>
>> The solution to the mentioned bug (1386443
>> ) was to format
>> with 512 sector size, which for my case is not an option:
>>
>> mkfs.xfs -f -i size=512 -s size=512 /dev/gluster/engine
>> illegal sector size 512; hw sector is 4096
>>
>> Is there any workaround to address this?
>>
>> Thanx,
>> Alex
>>
>>
>> On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi 
>> wrote:
>>
>>> Hi Maor,
>>>
>>> My disk are of 4K block size and from this bug seems that gluster
>>> replica needs 512B block size.
>>> Is there a way to make gluster function with 4K drives?
>>>
>>> Thank you!
>>>
>>> On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk 
>>> wrote:
>>>
 Hi Alex,

 I saw a bug that might be related to the issue you encountered at
 https://bugzilla.redhat.com/show_bug.cgi?id=1386443

 Sahina, maybe you have any advice? Do you think that BZ 1386443 is
 related?

 Regards,
 Maor

 On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi <
 rightkickt...@gmail.com> wrote:
 > Hi All,
 >
 > I have installed successfully several times oVirt (version 4.1)
 with 3 nodes
 > on top glusterfs.
 >
 > This time, when trying to configure the same setup, I am facing
 the
 > following issue which doesn't seem to go away. During
 installation i get the
 > error:
 >
 > Failed to execute stage 'Misc configuration': Cannot 

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-05 Thread Abi Askushi
The question that arises is what is needed to make gluster aware of the 4K
physical sectors presented to it (the logical sector is also 4K). The
offset (127488) in the log does not appear to be 4K-aligned.

Alex
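A quick check of the observation above, using pure arithmetic with the offset taken from the posix_readv error in the brick log quoted below: the failing read is 512-byte aligned but not 4K-aligned, which is consistent with O_DIRECT I/O being rejected by a device with 4K logical sectors.

```shell
# Offset from the posix_readv error in the brick log; test both alignments.
OFFSET=127488
for ALIGN in 512 4096; do
  REM=$((OFFSET % ALIGN))
  if [ "$REM" -eq 0 ]; then
    echo "offset $OFFSET is ${ALIGN}-byte aligned"
  else
    echo "offset $OFFSET is NOT ${ALIGN}-byte aligned (remainder $REM)"
  fi
done
# -> offset 127488 is 512-byte aligned
# -> offset 127488 is NOT 4096-byte aligned (remainder 512)
```

On a real host, `blockdev --getss --getpbsz /dev/sdX` prints the logical and physical sector sizes that O_DIRECT transfers must be aligned to.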

On Mon, Jun 5, 2017 at 2:47 PM, Abi Askushi  wrote:

> Hi Krutika,
>
> I am saying that I am facing this issue with 4k drives. I never
> encountered this issue with 512 drives.
>
> Alex
>
> On Jun 5, 2017 14:26, "Krutika Dhananjay"  wrote:
>
>> This seems like a case of O_DIRECT reads and writes gone wrong, judging
>> by the 'Invalid argument' errors.
>>
>> The two operations that have failed on gluster bricks are:
>>
>> [2017-06-05 09:40:39.428979] E [MSGID: 113072]
>> [posix.c:3453:posix_writev] 0-engine-posix: write failed: offset 0,
>> [Invalid argument]
>> [2017-06-05 09:41:00.865760] E [MSGID: 113040] [posix.c:3178:posix_readv]
>> 0-engine-posix: read failed on gfid=8c94f658-ac3c-4e3a-b368-8c038513a914,
>> fd=0x7f408584c06c, offset=127488 size=512, buf=0x7f4083c0b000 [Invalid
>> argument]
>>
>> But then, both the write and the read have 512byte-aligned offset, size
>> and buf address (which is correct).
>>
>> Are you saying you don't see this issue with 4K block-size?
>>
>> -Krutika
>>
>> On Mon, Jun 5, 2017 at 3:21 PM, Abi Askushi 
>> wrote:
>>
>>> Hi Sahina,
>>>
>>> Attached are the logs. Let me know if sth else is needed.
>>>
>>> I have 5 disks (with 4K physical sector) in RAID5. The RAID has 64K
>>> stripe size at the moment.
>>> I have prepared the storage as below:
>>>
>>> pvcreate --dataalignment 256K /dev/sda4
>>> vgcreate --physicalextentsize 256K gluster /dev/sda4
>>>
>>> lvcreate -n engine --size 120G gluster
>>> mkfs.xfs -f -i size=512 /dev/gluster/engine
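A side note on the numbers above: the 256K value passed to pvcreate and vgcreate matches the RAID geometry described earlier in the message. A sketch of the arithmetic, assuming one parity member per stripe as in plain RAID5:

```shell
# Full-stripe width for the layout above (5 disks in RAID5, 64K stripe unit):
# parity consumes one member, so 4 data members carry each stripe.
DISKS=5; PARITY=1; STRIPE_UNIT_K=64
FULL_STRIPE_K=$(( (DISKS - PARITY) * STRIPE_UNIT_K ))
echo "full-stripe width = ${FULL_STRIPE_K}K"   # -> full-stripe width = 256K
```

Aligning the PV data area and extent size to the full-stripe width avoids read-modify-write cycles on the RAID set.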
>>>
>>> Thanx,
>>> Alex
>>>
>>> On Mon, Jun 5, 2017 at 12:14 PM, Sahina Bose  wrote:
>>>
 Can we have the gluster mount logs and brick logs to check if it's the
 same issue?

 On Sun, Jun 4, 2017 at 11:21 PM, Abi Askushi 
 wrote:

> I clean installed everything and ran into the same.
> I then ran gdeploy and encountered the same issue when deploying
> engine.
> Seems that gluster (?) doesn't like 4K sector drives. I am not sure if
> it has to do with alignment. The weird thing is that gluster volumes are
> all ok, replicating normally and no split brain is reported.
>
> The solution to the mentioned bug (1386443
> ) was to format
> with 512 sector size, which for my case is not an option:
>
> mkfs.xfs -f -i size=512 -s size=512 /dev/gluster/engine
> illegal sector size 512; hw sector is 4096
>
> Is there any workaround to address this?
>
> Thanx,
> Alex
>
>
> On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi 
> wrote:
>
>> Hi Maor,
>>
>> My disks are of 4K block size, and from this bug it seems that gluster
>> replica needs 512B block size.
>> Is there a way to make gluster function with 4K drives?
>>
>> Thank you!
>>
>> On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk 
>> wrote:
>>
>>> Hi Alex,
>>>
>>> I saw a bug that might be related to the issue you encountered at
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1386443
>>>
>>> Sahina, maybe you have any advice? Do you think that BZ1386443 is
>>> related?
>>>
>>> Regards,
>>> Maor
>>>
>>> On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi 
>>> wrote:
>>> > Hi All,
>>> >
>>> > I have installed successfully several times oVirt (version 4.1)
>>> with 3 nodes
>>> > on top glusterfs.
>>> >
>>> > This time, when trying to configure the same setup, I am facing the
>>> > following issue which doesn't seem to go away. During installation
>>> i get the
>>> > error:
>>> >
>>> > Failed to execute stage 'Misc configuration': Cannot acquire host
>>> id:
>>> > (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22,
>>> 'Sanlock
>>> > lockspace add failure', 'Invalid argument'))
>>> >
>>> > The only difference in this setup is that instead of standard
>>> partitioning i
>>> > have GPT partitioning and the disks have 4K block size instead of
>>> 512.
>>> >
>>> > The /var/log/sanlock.log has the following lines:
>>> >
>>> > 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace
>>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-center/m
>>> nt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/ba6bd862-c2b8
>>> -46e7-b2c8-91e4a5bb2047/dom_md/ids:0
>>> > 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource
>>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-center/m
>>> nt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/ba6bd862-c2b
>>> 

[ovirt-users] disaster recovery, connecting to existing storage domains from new engine

2017-06-05 Thread Thomas Wakefield
Is it possible to force a connection to existing storage domains from a new 
engine?  I have a cluster on which I can’t get the web console to restart, but 
everything else is working, so I want to know whether it’s possible to just 
remount those storage domains to the new engine.

Also, how do I clear an export domain that complains that it’s still attached 
to another pool?  I could also just restore most of my VMs from the export 
domain, but it is still locked to the old engine.

Thanks!
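On the export-domain question: a widely circulated, unsupported workaround is to edit the domain's `dom_md/metadata` file on the storage itself, blanking the `POOL_UUID=` value and deleting the `_SHA_CKSUM=` line so the domain no longer claims to be attached to the old pool. A sketch of the edit, run here against a scratch copy rather than a live domain (the sample field values are made up):

```shell
# Work on a throwaway copy of a metadata file to show the edit itself.
MD=$(mktemp)
printf 'CLASS=Backup\nPOOL_UUID=0002-0002-0002-0002-00000000021f\n_SHA_CKSUM=0123abcd\nROLE=Regular\n' > "$MD"

# Detach from the old pool: blank POOL_UUID and drop the checksum line.
sed -i -e 's/^POOL_UUID=.*/POOL_UUID=/' -e '/^_SHA_CKSUM=/d' "$MD"

cat "$MD"
# -> CLASS=Backup
# -> POOL_UUID=
# -> ROLE=Regular
rm -f "$MD"
```

On a real setup the file lives under the export domain's mount point, inside the storage-domain UUID directory (`.../<SD_UUID>/dom_md/metadata`); take a backup of the file before editing, and only then import the domain into the new engine.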

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted-Engine Deploy - Error Creating a Storage Domain

2017-06-05 Thread Sandro Bonazzola
On Fri, Jun 2, 2017 at 6:16 PM, Langley, Robert 
wrote:

> Any progress?
>

Nir? Allon? Tal?


>
>
> One thing that has been going through my mind is whether oVirt allows a
> GlusterFS storage domain to work with multiple CPU types?
>

Sahina?


> The two dedicated GlusterFS storage servers are AMD Opteron, and the third
> server for the replica 3, which I am hoping I can also use as an oVirt host,
> is a new Intel Xeon (Dell PE R830). I know GlusterFS allows for mixed
> hardware, but I’m not sure whether mixed hardware and mixed use have been
> accounted for when oVirt manages GlusterFS.
>
>
>
> *From:* Simone Tiraboschi [mailto:stira...@redhat.com]
> *Sent:* Wednesday, May 31, 2017 8:41 AM
> *To:* Langley, Robert 
> *Cc:* Sandro Bonazzola ; Nir Soffer <
> nsof...@redhat.com>; Allon Mureinik ; Tal Nisan <
> tni...@redhat.com>; users 
>
> *Subject:* Re: [ovirt-users] Hosted-Engine Deploy - Error Creating a
> Storage Domain
>
>
>
> It seems to be something related to the IOProcess connection.
>
>
>
> 2017-05-25 20:54:40,362-0700 INFO  (jsonrpc/4) [IOProcessClient] Starting
> client ioprocess-3 (__init__:330)
>
> 2017-05-25 20:54:40,370-0700 INFO  (ioprocess/31239) [IOProcess] Starting
> ioprocess (__init__:452)
>
> 2017-05-25 20:54:40,407-0700 ERROR (jsonrpc/4) [storage.TaskManager.Task]
> (Task='89dd17d2-8a38-4825-9ba2-f231f1aff9f5') Unexpected error (task:870)
>
> Traceback (most recent call last):
>
>   File "/usr/share/vdsm/storage/task.py", line 877, in _run
>
> return fn(*args, **kargs)
>
>   File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in
> wrapper
>
> res = f(*args, **kwargs)
>
>   File "/usr/share/vdsm/storage/hsm.py", line 2581, in createStorageDomain
>
> storageType, domVersion)
>
>   File "/usr/share/vdsm/storage/nfsSD.py", line 87, in create
>
> remotePath, storageType, version)
>
>   File "/usr/share/vdsm/storage/fileSD.py", line 421, in _prepareMetadata
>
> procPool.fileUtils.createdir(metadataDir, 0o775)
>
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
> line 166, in createdir
>
> self._iop.mkdir(tmpPath, mode)
>
>   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line
> 524, in mkdir
>
> self.timeout)
>
>   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line
> 466, in _sendCommand
>
> raise OSError(errcode, errstr)
>
> OSError: [Errno 61] No data available
>
> 2017-05-25 20:54:40,409-0700 INFO  (jsonrpc/4) [storage.TaskManager.Task]
> (Task='89dd17d2-8a38-4825-9ba2-f231f1aff9f5') aborting: Task is aborted:
> u'[Errno 61] No data available' - code 100 (task:1175)
>
> 2017-05-25 20:54:40,409-0700 ERROR (jsonrpc/4) [storage.Dispatcher] [Errno
> 61] No data available (dispatcher:80)
>
> Traceback (most recent call last):
>
>   File "/usr/share/vdsm/storage/dispatcher.py", line 72, in wrapper
>
> result = ctask.prepare(func, *args, **kwargs)
>
>   File "/usr/share/vdsm/storage/task.py", line 105, in wrapper
>
> return m(self, *a, **kw)
>
>   File "/usr/share/vdsm/storage/task.py", line 1183, in prepare
>
> raise self.error
>
> OSError: [Errno 61] No data available
>
> 2017-05-25 20:54:40,410-0700 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC
> call StorageDomain.create failed (error 351) in 0.24 seconds (__init__:533)
>
>
>
> Nir, any hint?
>
>
>
>
>
> On Wed, May 31, 2017 at 5:24 PM, Langley, Robert <
> robert.lang...@ventura.org> wrote:
>
> SOSReport attached, with md5. Thank you
>
>
>
>
>
> *From:* Sandro Bonazzola [mailto:sbona...@redhat.com]
> *Sent:* Wednesday, May 31, 2017 12:00 AM
> *To:* Langley, Robert ; Nir Soffer <
> nsof...@redhat.com>; Allon Mureinik ; Tal Nisan <
> tni...@redhat.com>; Simone Tiraboschi 
> *Cc:* users 
> *Subject:* Re: [ovirt-users] Hosted-Engine Deploy - Error Creating a
> Storage Domain
>
>
>
>
>
>
>
> On Tue, May 30, 2017 at 11:50 PM, Langley, Robert <
> robert.lang...@ventura.org> wrote:
>
> While going through the hosted engine deployment, I am not able to have it
> complete successfully. Even going through the setup log, I’m not able to
> identify what is wrong, or why it thinks the system is not reliable.
>
>
>
> Traceback (most recent call last):
>
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in
> _executeMethod
>
> method['method']()
>
>   File "/usr/share/ovirt-hosted-engine-setup/scripts/../
> plugins/gr-he-setup/storage/storage.py", line 957, in _misc
>
> self._createStorageDomain()
>
>   File "/usr/share/ovirt-hosted-engine-setup/scripts/../
> plugins/gr-he-setup/storage/storage.py", line 546, in _createStorageDomain
>
> raise RuntimeError(status['status']['message'])
>
> RuntimeError: Error creating a storage domain: (u'storageType=7,
> sdUUID=a2494209-f823-4745-8eea-a122889d48f6, 

Re: [ovirt-users] unsuccessful hosted engine install

2017-06-05 Thread Simone Tiraboschi
On Mon, Jun 5, 2017 at 3:57 PM, Brendan Hartzell  wrote:

> After letting this sit for a few days, does anyone have any ideas as to
> how to deal with my situation?  Would anyone like me to send the SOS report
> directly to them?  It's a 9MB file.
>
> If nothing comes up, I'm going to try and sift through the SOS report
> tonight, but I won't know what I'm trying to find.
>
> Thank you for any and all help.
>
> On Thu, Jun 1, 2017 at 1:15 AM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Thu, Jun 1, 2017 at 6:36 AM, Brendan Hartzell 
>> wrote:
>>
>>> Ran the 4 commands listed above, no errors on the screen.
>>>
>>> Started the hosted-engine standard setup from the web-UI.
>>>
>>> Using iSCSI for the storage.
>>>
>>> Using mostly default options, I got these errors in the web-UI.
>>>
>>>  Error creating Volume Group: Failed to initialize physical device:
>>> ("[u'/dev/mapper/36589cfc00de7482638fcfce4']",)
>>> Failed to execute stage 'Misc configuration': Failed to initialize
>>> physical device: ("[u'/dev/mapper/36589cfc00de7482638fcfce4']",)
>>> Hosted Engine deployment failed: this system is not reliable, please
>>> check the issue,fix and redeploy
>>>
>>> I rebuilt my iSCSI (I don't think I cleaned it up from a previous
>>> install).
>>> Re-ran the above 4 commands.
>>> Restarted hosted engine standard setup from web-UI.
>>> Install moved past "Connecting Storage Pool" so I believe the above was
>>> my fault.
>>>
>>> These are the last messages displayed on the web-UI.
>>>  Creating Storage Pool
>>> Connecting Storage Pool
>>> Verifying sanlock lockspace initialization
>>> Creating Image for 'hosted-engine.lockspace' ...
>>> Image for 'hosted-engine.lockspace' created successfully
>>> Creating Image for 'hosted-engine.metadata' ...
>>> Image for 'hosted-engine.metadata' created successfully
>>> Creating VM Image
>>> Extracting disk image from OVF archive (could take a few minutes
>>> depending on archive size)
>>> Validating pre-allocated volume size
>>> Uploading volume to data domain (could take a few minutes depending on
>>> archive size)
>>>
>>> At the host terminal, I got the error "watchdog watchdog0: watchdog did
>>> not stop!"
>>> Then the host restarted.
>>>
>>
>> Simone, can you help here?
>>
>>
Ok, sorry for the delay.
The second installation attempt seems fine, but it seems that
ovirt-hosted-engine-cleanup failed to stop sanlock, and so the
watchdog kicked in, rebooting your system in the middle of the deployment attempt.

could you please post the output of
   ovirt-hosted-engine-cleanup
   sanlock client status
?



>
>>
>>
>>>
>>> This is as far as I've gotten in previous attempts.
>>>
>>> Attaching the hosted-engine-setup log.
>>>
>>> The SOS report is 9MB and the ovirt users group will drop the email.
>>>
>>> On Wed, May 31, 2017 at 6:59 AM, Sandro Bonazzola 
>>> wrote:
>>>


 On Wed, May 31, 2017 at 3:10 PM, Brendan Hartzell 
 wrote:

> Now that you have identified the problem, should I run the following
> commands and send you another SOS?
>
> ovirt-hosted-engine-cleanup
> vdsm-tool configure --force
> systemctl restart libvirtd
> systemctl restart vdsm
>
> Or is there a different plan in mind?
>

 I would have expected someone from virt team to follow up for further
 investigations :-)
 above commands should work.



>
> Thank you,
>
> Brendan
>
> On Tue, May 30, 2017 at 11:42 PM, Sandro Bonazzola <
> sbona...@redhat.com> wrote:
>
>>
>>
>> On Wed, May 31, 2017 at 4:45 AM, Brendan Hartzell 
>> wrote:
>>
>>> Can you please elaborate about the failure you see here and how are
>>> you trying to manually partition the host?
>>>
>>> Sure, I will start from the beginning.
>>> - Using: ovirt-node-ng-installer-ovirt-4.1-2017052604.iso
>>> - During installation I setup one of the two interfaces and check
>>> the box to automatically use the connection.
>>> - I'm currently providing a host name of node-1.test.net until I
>>> have a successful process.
>>> - I configure date and time for my timezone and to use an internal
>>> NTP server.
>>> - On Installation Destination, I pick my 128GB USB3.0 SanDisk flash
>>> drive, check the box that I would like to make additional space, and 
>>> click
>>> done.  In the reclaim disk space window, I click delete all, and then
>>> reclaim space.  I go back into the Installation Destination, select 
>>> that I
>>> will configure partitioning, and click done.  The Manual Partitioning
>>> window opens, I use the option to automatically create mount points.
>>>
>>
>> In this screen, please change the partitioning scheme from LVM to LVM
>> Thin Provisioning: it should solve the following error.

Re: [ovirt-users] windows 2016 drivers

2017-06-05 Thread suporte
Hi, 

Ok, thanks, I'll try that one. 


De: "Lev Veyde"  
Para: supo...@logicworks.pt 
Cc: "ovirt users"  
Enviadas: Segunda-feira, 5 De Junho de 2017 15:15:56 
Assunto: Re: [ovirt-users] windows 2016 drivers 

Hi, 

The latest version, and the one that supports 2016, is 4.1-5. 

You can get it from here: 
http://plain.resources.ovirt.org/pub/ovirt-4.1/iso/oVirt-toolsSetup/oVirt-toolsSetup-4.1-5.fc24.iso
 

The issue is that it's currently missing from el7. 

On Mon, Jun 5, 2017 at 5:02 PM, Lev Veyde < lve...@redhat.com > wrote: 



It should be supported, we'll need to check that. 


On Mon, Jun 5, 2017 at 4:42 PM, < supo...@logicworks.pt > wrote: 


Hi, 

On version 4.1.1.8-1.el7.centos, when trying to install 
oVirt-toolsSetup-4.1-3.fc24 on Windows Server 2016 I get an error saying the 
operating system is not compatible. 
I was able to install oVirt-toolsSetup_4.0-1.fc223, but when trying to transfer 
files to the VM the network gets too slow until it stops. 

Is w2016 still not supported? 

Thanks 

-- 

Jose Ferradeira 
http://www.logicworks.pt 








-- 



Lev Veyde 

Software Engineer , RHCE | RHCVA | MCITP 

Red Hat Israel 




l...@redhat.com | lve...@redhat.com 
TRIED. TESTED. TRUSTED. 





-- 



Lev Veyde 

Software Engineer , RHCE | RHCVA | MCITP 

Red Hat Israel 




l...@redhat.com | lve...@redhat.com 
TRIED. TESTED. TRUSTED. 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] windows 2016 drivers

2017-06-05 Thread Lev Veyde
Hi,

The latest version, and the one that supports 2016, is 4.1-5.

You can get it from here:
http://plain.resources.ovirt.org/pub/ovirt-4.1/iso/oVirt-toolsSetup/oVirt-toolsSetup-4.1-5.fc24.iso

The issue is that it's currently missing from el7.

On Mon, Jun 5, 2017 at 5:02 PM, Lev Veyde  wrote:

> It should be supported, we'll need to check that.
>
>
> On Mon, Jun 5, 2017 at 4:42 PM,  wrote:
>
>> Hi,
>>
>> On version 4.1.1.8-1.el7.centos, when trying to install
>> oVirt-toolsSetup-4.1-3.fc24 on Windows Server 2016 I get an error saying
>> the operating system is not compatible.
>> I was able to install oVirt-toolsSetup_4.0-1.fc223, but when trying to
>> transfer files to the VM the network gets too slow until it stops.
>>
>> Is w2016 still not supported?
>>
>> Thanks
>>
>> --
>> --
>> Jose Ferradeira
>> http://www.logicworks.pt
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
>
> Lev Veyde
>
> Software Engineer, RHCE | RHCVA | MCITP
>
> Red Hat Israel
>
> 
>
> l...@redhat.com | lve...@redhat.com
> 
> TRIED. TESTED. TRUSTED. 
>



-- 

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] windows 2016 drivers

2017-06-05 Thread Petr Matyáš

Hello,

when using guest tools from here [1] it really complains about 
unsupported Windows version.


But there is a different version downstream, so maybe this is not the latest 
version?


[1] - http://resources.ovirt.org/pub/ovirt-4.1/iso/oVirt-toolsSetup/


On 06/05/2017 04:02 PM, Lev Veyde wrote:

It should be supported, we'll need to check that.


On Mon, Jun 5, 2017 at 4:42 PM, > wrote:


Hi,

On version 4.1.1.8-1.el7.centos, when trying to install
oVirt-toolsSetup-4.1-3.fc24 on Windows Server 2016 I get an error
saying the operating system is not compatible.
I was able to install oVirt-toolsSetup_4.0-1.fc223, but when trying
to transfer files to the VM the network gets too slow until it stops.

Is w2016 still not supported?

Thanks

-- 


Jose Ferradeira
http://www.logicworks.pt

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





--

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com  | lve...@redhat.com 




TRIED. TESTED. TRUSTED. 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [ovirt-users] windows 2016 drivers

2017-06-05 Thread Lev Veyde
It should be supported, we'll need to check that.


On Mon, Jun 5, 2017 at 4:42 PM,  wrote:

> Hi,
>
> On version 4.1.1.8-1.el7.centos, when trying to install
> oVirt-toolsSetup-4.1-3.fc24 on Windows Server 2016 I get an error saying
> the operating system is not compatible.
> I was able to install oVirt-toolsSetup_4.0-1.fc223, but when trying to
> transfer files to the VM the network gets too slow until it stops.
>
> Is w2016 still not supported?
>
> Thanks
>
> --
> --
> Jose Ferradeira
> http://www.logicworks.pt
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Manually copying VM disks from FC data domain

2017-06-05 Thread Adam Litke
Hmm, try:
   pvscan /dev/vdc

Use fdisk to see if partitions can be read.  Also, when you copied the data
with dd, did you use the conv=fsync option?  If not, your data could be
cached in memory waiting to be flushed to disk even though the dd command
completed.  Try the copy again with conv=fsync.
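The flush-on-completion point can be demonstrated on scratch files rather than LVs (with GNU dd, `conv=fsync` forces an fsync of the output before dd exits):

```shell
# Copy between two temporary files, fsync the destination before exiting,
# then verify the copy byte-for-byte.
SRC=$(mktemp); DST=$(mktemp)
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null
dd if="$SRC" of="$DST" bs=1M conv=fsync 2>/dev/null
cmp -s "$SRC" "$DST" && echo "copies match"   # -> copies match
rm -f "$SRC" "$DST"
```

With block devices the same flags apply; the fsync matters because a dd that exits with the data still in the page cache can look complete while the destination LV has not yet been written through.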


On Fri, Jun 2, 2017 at 8:31 PM, Николаев Алексей <
alexeynikolaev.p...@yandex.ru> wrote:

>
>
> 02.06.2017, 16:09, "Adam Litke" :
>
> hmm, strange.  So all three of those missing volumes have associated LVs.
> Can you try to activate them and see if you can read from them?
>
>   lvchange --config 'global {use_lvmetad=0}' -ay 0fcd2921-8a55-4ff7-9cba-
> dca6ffb7b67e/3b089aed-b3e1-4423-8585-e65752d19ffe
>   dd 
> if=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/3b089aed-b3e1-4423-8585-e65752d19ffe
> of=/dev/null bs=1M count=1
>
>
> # lvchange --config 'global {use_lvmetad=0}' -ay 0fcd2921-8a55-4ff7-9cba-
> dca6ffb7b67e/3b089aed-b3e1-4423-8585-e65752d19ffe
> WARNING: lvmetad is running but disabled. Restart lvmetad before enabling
> it!
>
>
> # dd 
> if=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/3b089aed-b3e1-4423-8585-e65752d19ffe
> of=/dev/null bs=1M count=1
>
>
> 1+0 records in
> 1+0 records out
> 1048576 bytes (1.0 MB) copied, 0.00918524 s, 114 MB/s
>
> # lvchange --config 'global {use_lvmetad=0}' -ay 0fcd2921-8a55-4ff7-9cba-
> dca6ffb7b67e/03917876-0e28-4457-bf44-53c7ea2b4d12
>   WARNING: lvmetad is running but disabled. Restart lvmetad before
> enabling it!
> # dd 
> if=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/03917876-0e28-4457-bf44-53c7ea2b4d12
> of=/dev/null bs=1M count=1
> 1+0 records in
> 1+0 records out
> 1048576 bytes (1.0 MB) copied, 0.00440839 s, 238 MB/s
>
> # lvchange --config 'global {use_lvmetad=0}' -ay 0fcd2921-8a55-4ff7-9cba-
> dca6ffb7b67e/fd8822ee-4fc9-49ba-9760-87a85d56bf91
>   WARNING: lvmetad is running but disabled. Restart lvmetad before
> enabling it!
> # dd 
> if=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/fd8822ee-4fc9-49ba-9760-87a85d56bf91
> of=/dev/null bs=1M count=1
> 1+0 records in
> 1+0 records out
> 1048576 bytes (1.0 MB) copied, 0.00448938 s, 234 MB/s
>
> Well, read operation is OK.
>
>
>
>
> If this works, then one way to recover the data is to use the UI to create
> new disks of the same size as the old ones.  Then, activate the LVs
> associated with the old volumes and the new ones.  Then use dd (or qemu-img
> convert) to copy from old to new.  Then attach the new disks to your VM.
>
>
> I have created a new disk in the UI and activated it.
>
> 6050885b-5dd5-476c-b907-4ce2b3f37b0a : {"DiskAlias":"r13-sed-app_
> Disk1-recovery","DiskDescription":""}.
> # lvchange --config 'global {use_lvmetad=0}' -ay 0fcd2921-8a55-4ff7-9cba-
> dca6ffb7b67e/6050885b-5dd5-476c-b907-4ce2b3f37b0a
>
> Copy by dd from old DiskAlias:r13-sed-app_Disk1 to new
> DiskAlias:r13-sed-app_Disk1-recovery
>
> # dd 
> if=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/3b089aed-b3e1-4423-8585-e65752d19ffe
> of=/dev/0fcd2921-8a55-4ff7-9cba-dca6ffb7b67e/6050885b-5dd5-476c-b907-4ce2b3f37b0a
> status=progress
> 18215707136 bytes (18 GB) copied, 496.661644 s, 36.7 MB/s
> 35651584+0 records in
> 35651584+0 records out
> 18253611008 bytes (18 GB) copied, 502.111 s, 36.4 MB/s
>
> I added the new disk to the existing VM (vdc), but I can't see the LVM
> volumes on this disk. Where could I have gone wrong?
>
> # lsblk
> NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> sr011:01 1024M  0 rom
> vda   252:00   50G  0 disk
> ├─vda1252:10  200M  0 part /boot
> └─vda2252:20 49,8G  0 part
>   ├─vg_r34seddb-LogVol03 (dm-0)   253:00 26,8G  0 lvm  /
>   ├─vg_r34seddb-LogVol02 (dm-1)   253:108G  0 lvm  [SWAP]
>   ├─vg_r34seddb-LogVol01 (dm-3)   253:305G  0 lvm  /tmp
>   └─vg_r34seddb-LogVol00 (dm-4)   253:40   10G  0 lvm  /var
> vdb   252:16   0 1000G  0 disk
> └─vdb1252:17   0 1000G  0 part
>   └─vg_r34seddb00-LogVol00 (dm-2) 253:20 1000G  0 lvm  /var/lib/pgsql
> vdc   252:32   0   50G  0 disk
>
>
>
> On Thu, Jun 1, 2017 at 6:44 PM, Николаев Алексей <
> alexeynikolaev.p...@yandex.ru> wrote:
>
> Thx for your help!
>
> 01.06.2017, 16:46, "Adam Litke" :
>
> When you say "not visible in oVirt", do you mean that you do not see them
> in the UI?
>
>
> Yes, I can see some VM with prefix "external-" and without disks.
>
>
> Do you know the specific uuids for the missing volumes?  You could use lvm
> to check if the LVs are visible to the host.
>
> lvs --config 'global {use_lvmetad=0}' -o +tags
>
> For each LV, the tag beginning with IU_ indicates the image the volume
> belongs to.
>
>
>   LV   VG
> Attr   LSizePool Origin Data%  

Re: [ovirt-users] unsuccessful hosted engine install

2017-06-05 Thread Brendan Hartzell
After letting this sit for a few days, does anyone have any ideas as to how
to deal with my situation?  Would anyone like me to send the SOS report
directly to them?  It's a 9MB file.

If nothing comes up, I'm going to try and sift through the SOS report
tonight, but I won't know what I'm trying to find.

Thank you for any and all help.

On Thu, Jun 1, 2017 at 1:15 AM, Sandro Bonazzola 
wrote:

>
>
> On Thu, Jun 1, 2017 at 6:36 AM, Brendan Hartzell  wrote:
>
>> Ran the 4 commands listed above, no errors on the screen.
>>
>> Started the hosted-engine standard setup from the web-UI.
>>
>> Using iSCSI for the storage.
>>
>> Using mostly default options, I got these errors in the web-UI.
>>
>>  Error creating Volume Group: Failed to initialize physical device:
>> ("[u'/dev/mapper/36589cfc00de7482638fcfce4']",)
>> Failed to execute stage 'Misc configuration': Failed to initialize
>> physical device: ("[u'/dev/mapper/36589cfc00de7482638fcfce4']",)
>> Hosted Engine deployment failed: this system is not reliable, please
>> check the issue,fix and redeploy
>>
>> I rebuilt my iSCSI (I don't think I cleaned it up from a previous
>> install).
>> Re-ran the above 4 commands.
>> Restarted hosted engine standard setup from web-UI.
>> Install moved past "Connecting Storage Pool" so I believe the above was
>> my fault.
>>
>> These are the last messages displayed on the web-UI.
>>  Creating Storage Pool
>> Connecting Storage Pool
>> Verifying sanlock lockspace initialization
>> Creating Image for 'hosted-engine.lockspace' ...
>> Image for 'hosted-engine.lockspace' created successfully
>> Creating Image for 'hosted-engine.metadata' ...
>> Image for 'hosted-engine.metadata' created successfully
>> Creating VM Image
>> Extracting disk image from OVF archive (could take a few minutes
>> depending on archive size)
>> Validating pre-allocated volume size
>> Uploading volume to data domain (could take a few minutes depending on
>> archive size)
>>
>> At the host terminal, I got the error "watchdog watchdog0: watchdog did
>> not stop!"
>> Then the host restarted.
>>
>
> Simone, can you help here?
>
>
>
>
>>
>> This is as far as I've gotten in previous attempts.
>>
>> Attaching the hosted-engine-setup log.
>>
>> The SOS report is 9MB and the ovirt users group will drop the email.
>>
>> On Wed, May 31, 2017 at 6:59 AM, Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On Wed, May 31, 2017 at 3:10 PM, Brendan Hartzell 
>>> wrote:
>>>
 Now that you have identified the problem, should I run the following
 commands and send you another SOS?

 ovirt-hosted-engine-cleanup
 vdsm-tool configure --force
 systemctl restart libvirtd
 systemctl restart vdsm

 Or is there a different plan in mind?

>>>
>>> I would have expected someone from virt team to follow up for further
>>> investigations :-)
>>> above commands should work.
>>>
>>>
>>>

 Thank you,

 Brendan

 On Tue, May 30, 2017 at 11:42 PM, Sandro Bonazzola  wrote:

>
>
> On Wed, May 31, 2017 at 4:45 AM, Brendan Hartzell 
> wrote:
>
>> Can you please elaborate about the failure you see here and how are
>> you trying to manually partition the host?
>>
>> Sure, I will start from the beginning.
>> - Using: ovirt-node-ng-installer-ovirt-4.1-2017052604.iso
>> - During installation I setup one of the two interfaces and check the
>> box to automatically use the connection.
>> - I'm currently providing a host name of node-1.test.net until I
>> have a successful process.
>> - I configure date and time for my timezone and to use an internal
>> NTP server.
>> - On Installation Destination, I pick my 128GB USB3.0 SanDisk flash
>> drive, check the box that I would like to make additional space, and 
>> click
>> done.  In the reclaim disk space window, I click delete all, and then
>> reclaim space.  I go back into the Installation Destination, select that 
>> I
>> will configure partitioning, and click done.  The Manual Partitioning
>> window opens, I use the option to automatically create mount points.
>>
>
> In this screen, please change the partitioning scheme from LVM to LVM Thin
> Provisioning: it should solve the following error.
>
>
>
>
>>   At this point, /boot is 1024MB, /var is 15GB, / is 88.11 GB, and
>> swap is 11.57GB.  I then change / to 23.11 GB, update settings, change 
>> /var
>> to 80GB, update settings again, and click done.  I accept the changes and
>> begin installation.
>>
>> I tried these changes based on this article: http://www.ovirt.org/
>> documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/
>>
>> The article does say that you can specify a different directory than
>> /var/tmp, 

[ovirt-users] windows 2016 drivers

2017-06-05 Thread suporte
Hi, 

On version 4.1.1.8-1.el7.centos, when trying to install 
oVirt-toolsSetup-4.1-3.fc24 on Windows Server 2016 I get an error saying the 
operating system is not compatible. 
I was able to install oVirt-toolsSetup_4.0-1.fc223, but when trying to transfer 
files to the VM the network gets too slow until it stops. 

Is w2016 still not supported? 

Thanks 

-- 

Jose Ferradeira 
http://www.logicworks.pt 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-05 Thread Abi Askushi
Hi Krutika,

I am saying that I am facing this issue with 4k drives. I never encountered
this issue with 512 drives.

Alex

On Jun 5, 2017 14:26, "Krutika Dhananjay"  wrote:

> This seems like a case of O_DIRECT reads and writes gone wrong, judging by
> the 'Invalid argument' errors.
>
> The two operations that have failed on gluster bricks are:
>
> [2017-06-05 09:40:39.428979] E [MSGID: 113072] [posix.c:3453:posix_writev]
> 0-engine-posix: write failed: offset 0, [Invalid argument]
> [2017-06-05 09:41:00.865760] E [MSGID: 113040] [posix.c:3178:posix_readv]
> 0-engine-posix: read failed on gfid=8c94f658-ac3c-4e3a-b368-8c038513a914,
> fd=0x7f408584c06c, offset=127488 size=512, buf=0x7f4083c0b000 [Invalid
> argument]
>
> But then, both the write and the read have 512byte-aligned offset, size
> and buf address (which is correct).
>
> Are you saying you don't see this issue with 4K block-size?
>
> -Krutika
>
> On Mon, Jun 5, 2017 at 3:21 PM, Abi Askushi 
> wrote:
>
>> Hi Sahina,
>>
>> Attached are the logs. Let me know if sth else is needed.
>>
>> I have 5 disks (with 4K physical sector) in RAID5. The RAID has 64K
>> stripe size at the moment.
>> I have prepared the storage as below:
>>
>> pvcreate --dataalignment 256K /dev/sda4
>> vgcreate --physicalextentsize 256K gluster /dev/sda4
>>
>> lvcreate -n engine --size 120G gluster
>> mkfs.xfs -f -i size=512 /dev/gluster/engine
>>
>> Thanx,
>> Alex
>>
>> On Mon, Jun 5, 2017 at 12:14 PM, Sahina Bose  wrote:
>>
>>> Can we have the gluster mount logs and brick logs to check if it's the
>>> same issue?
>>>
>>> On Sun, Jun 4, 2017 at 11:21 PM, Abi Askushi 
>>> wrote:
>>>
 I clean installed everything and ran into the same issue.
 I then ran gdeploy and encountered the same issue when deploying
 engine.
 Seems that gluster (?) doesn't like 4K sector drives. I am not sure if
 it has to do with alignment. The weird thing is that gluster volumes are
 all ok, replicating normally and no split brain is reported.

 The solution to the mentioned bug (1386443
 ) was to format
 with 512 sector size, which for my case is not an option:

 mkfs.xfs -f -i size=512 -s size=512 /dev/gluster/engine
 illegal sector size 512; hw sector is 4096

 Is there any workaround to address this?

 Thanx,
 Alex


 On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi 
 wrote:

> Hi Maor,
>
> My disks are of 4K block size, and from this bug it seems that a gluster
> replica needs a 512B block size.
> Is there a way to make gluster function with 4K drives?
>
> Thank you!
>
> On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk 
> wrote:
>
>> Hi Alex,
>>
>> I saw a bug that might be related to the issue you encountered at
>> https://bugzilla.redhat.com/show_bug.cgi?id=1386443
>>
>> Sahina, maybe you have any advice? Do you think that BZ1386443 is
>> related?
>>
>> Regards,
>> Maor
>>
>> On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi 
>> wrote:
>> > Hi All,
>> >
>> > I have installed successfully several times oVirt (version 4.1)
>> with 3 nodes
>> > on top glusterfs.
>> >
>> > This time, when trying to configure the same setup, I am facing the
>> > following issue which doesn't seem to go away. During installation
>> > I get the
>> > error:
>> >
>> > Failed to execute stage 'Misc configuration': Cannot acquire host
>> id:
>> > (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22,
>> 'Sanlock
>> > lockspace add failure', 'Invalid argument'))
>> >
>> > The only difference in this setup is that instead of standard
>> > partitioning I
>> > have GPT partitioning and the disks have 4K block size instead of
>> 512.
>> >
>> > The /var/log/sanlock.log has the following lines:
>> >
>> > 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace
>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-center/m
>> nt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/ba6bd862-c2b8
>> -46e7-b2c8-91e4a5bb2047/dom_md/ids:0
>> > 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource
>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-center/m
>> nt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/ba6bd862-c2b
>> 8-46e7-b2c8-91e4a5bb2047/dom_md/leases:1048576
>> > for 2,9,23040
>> > 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace
>> > a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922:250:/rhev/data-center/m
>> nt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8
>> b4d5e5e922/dom_md/ids:0
>> > 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD
>> > 0x7f59b8c0:0x7f59b8d0:0x7f59b0101000 result -22:0 

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-05 Thread Krutika Dhananjay
This seems like a case of O_DIRECT reads and writes gone wrong, judging by
the 'Invalid argument' errors.

The two operations that have failed on gluster bricks are:

[2017-06-05 09:40:39.428979] E [MSGID: 113072] [posix.c:3453:posix_writev]
0-engine-posix: write failed: offset 0, [Invalid argument]
[2017-06-05 09:41:00.865760] E [MSGID: 113040] [posix.c:3178:posix_readv]
0-engine-posix: read failed on gfid=8c94f658-ac3c-4e3a-b368-8c038513a914,
fd=0x7f408584c06c, offset=127488 size=512, buf=0x7f4083c0b000 [Invalid
argument]

But then, both the write and the read have 512-byte-aligned offset, size and
buf address (which is correct).

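[For what it's worth, the alignment arithmetic can be checked directly from the failed read quoted above. This is only a sketch: the 4096 figure assumes the bricks sit on 4K logical-sector devices, as discussed elsewhere in this thread.]

```python
# Editor's sketch: O_DIRECT requires offset, size, and buffer address to be
# multiples of the device's *logical* sector size. The values below are taken
# from the failed posix_readv log line quoted above.

def aligned(value, sector):
    """True if value is a multiple of the sector size."""
    return value % sector == 0

offset, size, buf = 127488, 512, 0x7f4083c0b000

# Every field is 512-byte aligned, so the read is valid on 512B-sector disks...
assert all(aligned(v, 512) for v in (offset, size, buf))

# ...but offset and size are NOT 4096-byte aligned, which would explain
# EINVAL (-22) on a brick backed by 4K logical sectors.
print(aligned(offset, 4096), aligned(size, 4096))  # False False
```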
Are you saying you don't see this issue with 4K block-size?

-Krutika

On Mon, Jun 5, 2017 at 3:21 PM, Abi Askushi  wrote:

> Hi Sahina,
>
> Attached are the logs. Let me know if sth else is needed.
>
> I have 5 disks (with 4K physical sector) in RAID5. The RAID has 64K stripe
> size at the moment.
> I have prepared the storage as below:
>
> pvcreate --dataalignment 256K /dev/sda4
> vgcreate --physicalextentsize 256K gluster /dev/sda4
>
> lvcreate -n engine --size 120G gluster
> mkfs.xfs -f -i size=512 /dev/gluster/engine
>
> Thanx,
> Alex
>
> On Mon, Jun 5, 2017 at 12:14 PM, Sahina Bose  wrote:
>
>> Can we have the gluster mount logs and brick logs to check if it's the
>> same issue?
>>
>> On Sun, Jun 4, 2017 at 11:21 PM, Abi Askushi 
>> wrote:
>>
>>> I clean installed everything and ran into the same issue.
>>> I then ran gdeploy and encountered the same issue when deploying engine.
>>> Seems that gluster (?) doesn't like 4K sector drives. I am not sure if
>>> it has to do with alignment. The weird thing is that gluster volumes are
>>> all ok, replicating normally and no split brain is reported.
>>>
>>> The solution to the mentioned bug (1386443
>>> ) was to format
>>> with 512 sector size, which for my case is not an option:
>>>
>>> mkfs.xfs -f -i size=512 -s size=512 /dev/gluster/engine
>>> illegal sector size 512; hw sector is 4096
>>>
>>> Is there any workaround to address this?
>>>
>>> Thanx,
>>> Alex
>>>
>>>
>>> On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi 
>>> wrote:
>>>
 Hi Maor,

 My disks are of 4K block size, and from this bug it seems that a gluster
 replica needs a 512B block size.
 Is there a way to make gluster function with 4K drives?

 Thank you!

 On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk 
 wrote:

> Hi Alex,
>
> I saw a bug that might be related to the issue you encountered at
> https://bugzilla.redhat.com/show_bug.cgi?id=1386443
>
> Sahina, maybe you have any advice? Do you think that BZ1386443 is
> related?
>
> Regards,
> Maor
>
> On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi 
> wrote:
> > Hi All,
> >
> > I have installed successfully several times oVirt (version 4.1) with
> 3 nodes
> > on top glusterfs.
> >
> > This time, when trying to configure the same setup, I am facing the
> > following issue which doesn't seem to go away. During installation I
> > get the
> > error:
> >
> > Failed to execute stage 'Misc configuration': Cannot acquire host id:
> > (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22,
> 'Sanlock
> > lockspace add failure', 'Invalid argument'))
> >
> > The only difference in this setup is that instead of standard
> > partitioning I
> > have GPT partitioning and the disks have 4K block size instead of
> 512.
> >
> > The /var/log/sanlock.log has the following lines:
> >
> > 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace
> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-center/m
> nt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/ba6bd862-c2b8
> -46e7-b2c8-91e4a5bb2047/dom_md/ids:0
> > 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource
> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-center/m
> nt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/ba6bd862-c2b
> 8-46e7-b2c8-91e4a5bb2047/dom_md/leases:1048576
> > for 2,9,23040
> > 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace
> > a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922:250:/rhev/data-center/m
> nt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8
> b4d5e5e922/dom_md/ids:0
> > 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD
> > 0x7f59b8c0:0x7f59b8d0:0x7f59b0101000 result -22:0 match res
> > 2017-06-03 19:21:36+0200 23471 [23522]: read_sectors delta_leader
> offset
> > 127488 rv -22
> > /rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e
> 7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids
> > 2017-06-03 19:21:37+0200 23472 [930]: s9 host 250 1 23450
> > 

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-05 Thread Abi Askushi
Hi Sahina,

Attached are the logs. Let me know if sth else is needed.

I have 5 disks (with 4K physical sector) in RAID5. The RAID has 64K stripe
size at the moment.
I have prepared the storage as below:

pvcreate --dataalignment 256K /dev/sda4
vgcreate --physicalextentsize 256K gluster /dev/sda4

lvcreate -n engine --size 120G gluster
mkfs.xfs -f -i size=512 /dev/gluster/engine
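[Editor's note: the 256K figure above matches the stated RAID geometry. With 5 disks in RAID5, one disk's worth of capacity goes to parity, leaving 4 data disks; 4 × 64K stripe unit = 256K full-stripe width. A sketch of that arithmetic, using only the numbers given above:]

```python
# Editor's sketch: derive the LVM --dataalignment value from the RAID geometry
# described above (5-disk RAID5, 64K stripe unit). Values are in KiB.

def full_stripe_kib(disks, parity_disks, stripe_unit_kib):
    """Full-stripe width = number of data disks * stripe unit."""
    return (disks - parity_disks) * stripe_unit_kib

# RAID5 dedicates one disk's worth of parity, so 5 disks -> 4 data disks.
alignment = full_stripe_kib(disks=5, parity_disks=1, stripe_unit_kib=64)
print(f"--dataalignment {alignment}K")  # --dataalignment 256K
```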

Thanx,
Alex

On Mon, Jun 5, 2017 at 12:14 PM, Sahina Bose  wrote:

> Can we have the gluster mount logs and brick logs to check if it's the
> same issue?
>
> On Sun, Jun 4, 2017 at 11:21 PM, Abi Askushi 
> wrote:
>
>> I clean installed everything and ran into the same issue.
>> I then ran gdeploy and encountered the same issue when deploying engine.
>> Seems that gluster (?) doesn't like 4K sector drives. I am not sure if it
>> has to do with alignment. The weird thing is that gluster volumes are all
>> ok, replicating normally and no split brain is reported.
>>
>> The solution to the mentioned bug (1386443
>> ) was to format
>> with 512 sector size, which for my case is not an option:
>>
>> mkfs.xfs -f -i size=512 -s size=512 /dev/gluster/engine
>> illegal sector size 512; hw sector is 4096
>>
>> Is there any workaround to address this?
>>
>> Thanx,
>> Alex
>>
>>
>> On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi 
>> wrote:
>>
>>> Hi Maor,
>>>
>>> My disks are of 4K block size, and from this bug it seems that a gluster
>>> replica needs a 512B block size.
>>> Is there a way to make gluster function with 4K drives?
>>>
>>> Thank you!
>>>
>>> On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk 
>>> wrote:
>>>
 Hi Alex,

 I saw a bug that might be related to the issue you encountered at
 https://bugzilla.redhat.com/show_bug.cgi?id=1386443

 Sahina, maybe you have any advice? Do you think that BZ1386443 is
 related?

 Regards,
 Maor

 On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi 
 wrote:
 > Hi All,
 >
 > I have installed successfully several times oVirt (version 4.1) with
 3 nodes
 > on top glusterfs.
 >
 > This time, when trying to configure the same setup, I am facing the
 > following issue which doesn't seem to go away. During installation I
 > get the
 > error:
 >
 > Failed to execute stage 'Misc configuration': Cannot acquire host id:
 > (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22,
 'Sanlock
 > lockspace add failure', 'Invalid argument'))
 >
 > The only difference in this setup is that instead of standard
 > partitioning I
 > have GPT partitioning and the disks have 4K block size instead of 512.
 >
 > The /var/log/sanlock.log has the following lines:
 >
 > 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace
 > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-center/m
 nt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/ba6bd862-c2b8
 -46e7-b2c8-91e4a5bb2047/dom_md/ids:0
 > 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource
 > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-center/m
 nt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/ba6bd862-c2b
 8-46e7-b2c8-91e4a5bb2047/dom_md/leases:1048576
 > for 2,9,23040
 > 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace
 > a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922:250:/rhev/data-center/m
 nt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8
 b4d5e5e922/dom_md/ids:0
 > 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD
 > 0x7f59b8c0:0x7f59b8d0:0x7f59b0101000 result -22:0 match res
 > 2017-06-03 19:21:36+0200 23471 [23522]: read_sectors delta_leader
 offset
 > 127488 rv -22
 > /rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e
 7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids
 > 2017-06-03 19:21:37+0200 23472 [930]: s9 host 250 1 23450
 > 88c2244c-a782-40ed-9560-6cfa4d46f853.v0.neptune
 > 2017-06-03 19:21:37+0200 23472 [943]: s10 add_lockspace fail result
 -22
 >
 > And /var/log/vdsm/vdsm.log says:
 >
 > 2017-06-03 19:19:38,176+0200 WARN  (jsonrpc/3)
 > [storage.StorageServer.MountConnection] Using user specified
 > backup-volfile-servers option (storageServer:253)
 > 2017-06-03 19:21:12,379+0200 WARN  (periodic/1) [throttled] MOM not
 > available. (throttledlog:105)
 > 2017-06-03 19:21:12,380+0200 WARN  (periodic/1) [throttled] MOM not
 > available, KSM stats will be missing. (throttledlog:105)
 > 2017-06-03 19:21:14,714+0200 WARN  (jsonrpc/1)
 > [storage.StorageServer.MountConnection] Using user specified
 > backup-volfile-servers option (storageServer:253)
 > 2017-06-03 19:21:15,515+0200 ERROR (jsonrpc/4) [storage.initSANLock]
 Cannot
 > initialize SANLock for domain a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922
 > 

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-05 Thread Sahina Bose
Can we have the gluster mount logs and brick logs to check if it's the same
issue?

On Sun, Jun 4, 2017 at 11:21 PM, Abi Askushi 
wrote:

> I clean installed everything and ran into the same issue.
> I then ran gdeploy and encountered the same issue when deploying engine.
> Seems that gluster (?) doesn't like 4K sector drives. I am not sure if it
> has to do with alignment. The weird thing is that gluster volumes are all
> ok, replicating normally and no split brain is reported.
>
> The solution to the mentioned bug (1386443
> ) was to format with
> 512 sector size, which for my case is not an option:
>
> mkfs.xfs -f -i size=512 -s size=512 /dev/gluster/engine
> illegal sector size 512; hw sector is 4096
>
> Is there any workaround to address this?
>
> Thanx,
> Alex
>
>
> On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi 
> wrote:
>
>> Hi Maor,
>>
>> My disks are of 4K block size, and from this bug it seems that a gluster replica
>> needs a 512B block size.
>> Is there a way to make gluster function with 4K drives?
>>
>> Thank you!
>>
>> On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk  wrote:
>>
>>> Hi Alex,
>>>
>>> I saw a bug that might be related to the issue you encountered at
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1386443
>>>
>>> Sahina, maybe you have any advice? Do you think that BZ1386443 is related?
>>>
>>> Regards,
>>> Maor
>>>
>>> On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi 
>>> wrote:
>>> > Hi All,
>>> >
>>> > I have installed successfully several times oVirt (version 4.1) with 3
>>> nodes
>>> > on top glusterfs.
>>> >
>>> > This time, when trying to configure the same setup, I am facing the
>>> > following issue which doesn't seem to go away. During installation I
>>> > get the
>>> > error:
>>> >
>>> > Failed to execute stage 'Misc configuration': Cannot acquire host id:
>>> > (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22,
>>> 'Sanlock
>>> > lockspace add failure', 'Invalid argument'))
>>> >
>>> > The only difference in this setup is that instead of standard
>>> > partitioning I
>>> > have GPT partitioning and the disks have 4K block size instead of 512.
>>> >
>>> > The /var/log/sanlock.log has the following lines:
>>> >
>>> > 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace
>>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-center/m
>>> nt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/ba6bd862-
>>> c2b8-46e7-b2c8-91e4a5bb2047/dom_md/ids:0
>>> > 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource
>>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-center/m
>>> nt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/ba6bd862-
>>> c2b8-46e7-b2c8-91e4a5bb2047/dom_md/leases:1048576
>>> > for 2,9,23040
>>> > 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace
>>> > a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922:250:/rhev/data-center/m
>>> nt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-
>>> c8b4d5e5e922/dom_md/ids:0
>>> > 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD
>>> > 0x7f59b8c0:0x7f59b8d0:0x7f59b0101000 result -22:0 match res
>>> > 2017-06-03 19:21:36+0200 23471 [23522]: read_sectors delta_leader
>>> offset
>>> > 127488 rv -22
>>> > /rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e
>>> 7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids
>>> > 2017-06-03 19:21:37+0200 23472 [930]: s9 host 250 1 23450
>>> > 88c2244c-a782-40ed-9560-6cfa4d46f853.v0.neptune
>>> > 2017-06-03 19:21:37+0200 23472 [943]: s10 add_lockspace fail result -22
>>> >
>>> > And /var/log/vdsm/vdsm.log says:
>>> >
>>> > 2017-06-03 19:19:38,176+0200 WARN  (jsonrpc/3)
>>> > [storage.StorageServer.MountConnection] Using user specified
>>> > backup-volfile-servers option (storageServer:253)
>>> > 2017-06-03 19:21:12,379+0200 WARN  (periodic/1) [throttled] MOM not
>>> > available. (throttledlog:105)
>>> > 2017-06-03 19:21:12,380+0200 WARN  (periodic/1) [throttled] MOM not
>>> > available, KSM stats will be missing. (throttledlog:105)
>>> > 2017-06-03 19:21:14,714+0200 WARN  (jsonrpc/1)
>>> > [storage.StorageServer.MountConnection] Using user specified
>>> > backup-volfile-servers option (storageServer:253)
>>> > 2017-06-03 19:21:15,515+0200 ERROR (jsonrpc/4) [storage.initSANLock]
>>> Cannot
>>> > initialize SANLock for domain a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922
>>> > (clusterlock:238)
>>> > Traceback (most recent call last):
>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py",
>>> line
>>> > 234, in initSANLock
>>> > sanlock.init_lockspace(sdUUID, idsPath)
>>> > SanlockException: (107, 'Sanlock lockspace init failure', 'Transport
>>> > endpoint is not connected')
>>> > 2017-06-03 19:21:15,515+0200 WARN  (jsonrpc/4)
>>> > [storage.StorageDomainManifest] lease did not initialize successfully
>>> > (sd:557)
>>> > Traceback (most recent call last):
>>> >   File "/usr/share/vdsm/storage/sd.py", line 552, in initDomainLock
>>> > 

Re: [ovirt-users] Can't access ovirt-engine webpage

2017-06-05 Thread Thomas Wakefield

> On Jun 5, 2017, at 2:24 AM, Yedidyah Bar David  wrote:
> 
> On Mon, Jun 5, 2017 at 4:44 AM, Thomas Wakefield  wrote:
>> After a reboot I can’t access the management ovirt-engine webpage anymore.
>> 
>> 
>> server.log line that looks bad:
>> 
>> 2017-06-04 21:35:55,652-04 ERROR 
>> [org.jboss.as.controller.management-operation] (DeploymentScanner-threads - 
>> 2) WFLYCTL0190: Step handler 
>> org.jboss.as.server.deployment.DeploymentHandlerUtil$5@61b686ed for 
>> operation {"operation" => "undeploy","address" => [("deployment" => 
>> "engine.ear")],"owner" => [("subsystem" => "deployment-scanner"),("scanner" 
>> => "default")]} at address [("deployment" => "engine.ear")] failed handling 
>> operation rollback -- java.lang.IllegalStateException: WFLYCTL0345: Timeout 
>> after 5 seconds waiting for existing service service 
>> jboss.deployment.unit."engine.ear".contents to be removed so a new instance 
>> can be installed.: java.lang.IllegalStateException: WFLYCTL0345: Timeout 
>> after 5 seconds waiting for existing service service 
>> jboss.deployment.unit."engine.ear".contents to be removed so a new instance 
>> can be installed.
>>at 
>> org.jboss.as.controller.OperationContextImpl$ContextServiceBuilder.install(OperationContextImpl.java:2107)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.server.deployment.PathContentServitor.addService(PathContentServitor.java:50)
>>  [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.server.deployment.DeploymentHandlerUtil.doDeploy(DeploymentHandlerUtil.java:165)
>>  [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.server.deployment.DeploymentHandlerUtil$5$1.handleResult(DeploymentHandlerUtil.java:333)
>>  [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.controller.AbstractOperationContext$Step.invokeResultHandler(AbstractOperationContext.java:1384)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.controller.AbstractOperationContext$Step.handleResult(AbstractOperationContext.java:1366)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.controller.AbstractOperationContext$Step.finalizeInternal(AbstractOperationContext.java:1328)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.controller.AbstractOperationContext$Step.finalizeStep(AbstractOperationContext.java:1311)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.controller.AbstractOperationContext$Step.access$300(AbstractOperationContext.java:1185)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.controller.AbstractOperationContext.executeResultHandlerPhase(AbstractOperationContext.java:767)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.controller.AbstractOperationContext.processStages(AbstractOperationContext.java:644)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:370)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.controller.OperationContextImpl.executeOperation(OperationContextImpl.java:1329)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.controller.ModelControllerImpl.internalExecute(ModelControllerImpl.java:400)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.controller.ModelControllerImpl.execute(ModelControllerImpl.java:222)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.controller.ModelControllerImpl$3$1$1.run(ModelControllerImpl.java:756)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> org.jboss.as.controller.ModelControllerImpl$3$1$1.run(ModelControllerImpl.java:750)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at java.security.AccessController.doPrivileged(Native Method) 
>> [rt.jar:1.8.0_131]
>>at 
>> org.jboss.as.controller.ModelControllerImpl$3$1.run(ModelControllerImpl.java:750)
>>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
>>at 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
>> [rt.jar:1.8.0_131]
>>at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
>> [rt.jar:1.8.0_131]
>>at 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>>  [rt.jar:1.8.0_131]
>>at 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>  [rt.jar:1.8.0_131]
>>at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>  [rt.jar:1.8.0_131]
>>at 
>> 

Re: [ovirt-users] Can't access ovirt-engine webpage

2017-06-05 Thread Yedidyah Bar David
On Mon, Jun 5, 2017 at 4:44 AM, Thomas Wakefield  wrote:
> After a reboot I can’t access the management ovirt-engine webpage anymore.
>
>
> server.log line that looks bad:
>
> 2017-06-04 21:35:55,652-04 ERROR 
> [org.jboss.as.controller.management-operation] (DeploymentScanner-threads - 
> 2) WFLYCTL0190: Step handler 
> org.jboss.as.server.deployment.DeploymentHandlerUtil$5@61b686ed for operation 
> {"operation" => "undeploy","address" => [("deployment" => 
> "engine.ear")],"owner" => [("subsystem" => "deployment-scanner"),("scanner" 
> => "default")]} at address [("deployment" => "engine.ear")] failed handling 
> operation rollback -- java.lang.IllegalStateException: WFLYCTL0345: Timeout 
> after 5 seconds waiting for existing service service 
> jboss.deployment.unit."engine.ear".contents to be removed so a new instance 
> can be installed.: java.lang.IllegalStateException: WFLYCTL0345: Timeout 
> after 5 seconds waiting for existing service service 
> jboss.deployment.unit."engine.ear".contents to be removed so a new instance 
> can be installed.
> at 
> org.jboss.as.controller.OperationContextImpl$ContextServiceBuilder.install(OperationContextImpl.java:2107)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.server.deployment.PathContentServitor.addService(PathContentServitor.java:50)
>  [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.server.deployment.DeploymentHandlerUtil.doDeploy(DeploymentHandlerUtil.java:165)
>  [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.server.deployment.DeploymentHandlerUtil$5$1.handleResult(DeploymentHandlerUtil.java:333)
>  [wildfly-server-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.controller.AbstractOperationContext$Step.invokeResultHandler(AbstractOperationContext.java:1384)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.controller.AbstractOperationContext$Step.handleResult(AbstractOperationContext.java:1366)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.controller.AbstractOperationContext$Step.finalizeInternal(AbstractOperationContext.java:1328)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.controller.AbstractOperationContext$Step.finalizeStep(AbstractOperationContext.java:1311)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.controller.AbstractOperationContext$Step.access$300(AbstractOperationContext.java:1185)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.controller.AbstractOperationContext.executeResultHandlerPhase(AbstractOperationContext.java:767)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.controller.AbstractOperationContext.processStages(AbstractOperationContext.java:644)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:370)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.controller.OperationContextImpl.executeOperation(OperationContextImpl.java:1329)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.controller.ModelControllerImpl.internalExecute(ModelControllerImpl.java:400)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.controller.ModelControllerImpl.execute(ModelControllerImpl.java:222)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.controller.ModelControllerImpl$3$1$1.run(ModelControllerImpl.java:756)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> org.jboss.as.controller.ModelControllerImpl$3$1$1.run(ModelControllerImpl.java:750)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at java.security.AccessController.doPrivileged(Native Method) 
> [rt.jar:1.8.0_131]
> at 
> org.jboss.as.controller.ModelControllerImpl$3$1.run(ModelControllerImpl.java:750)
>  [wildfly-controller-2.2.0.Final.jar:2.2.0.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [rt.jar:1.8.0_131]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [rt.jar:1.8.0_131]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [rt.jar:1.8.0_131]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [rt.jar:1.8.0_131]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [rt.jar:1.8.0_131]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [rt.jar:1.8.0_131]
> at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_131]
> at