Re: [ovirt-users] OVIRT_ENGINE_DWH org.postgresql.util.PSQLException:ERROR: more than one row returned by a subquery used as an expression

2017-08-27 Thread Yedidyah Bar David
On Fri, Aug 25, 2017 at 10:47 PM, Charles Gruener  wrote:
> Thank you so much for the detailed response!

Glad it helped :-)

>
>> You can try looking at the dwh_history_timekeeping table in the engine
>> (not dwh) database:
>>
>> su - postgres -c 'psql engine -c "select * from dwh_history_timekeeping;"'
>>
>> Most likely you'll find there more than one line with var_name 'lastSync'.
>
> And that I most certainly did.  There were duplicates of each var_name.  I simply 
> deleted one of each of the pairs that made the most sense to delete, reran 
> engine-setup, and all appears to be working now!
>
>> How this happened is quite interesting/useful to know, because it
>> should not normally happen, and is most likely a bug. If you can
>> reproduce this, please file a bug with relevant details. Thanks!
>
> I'm pretty sure this was a self-inflicted issue: a while back, when things 
> broke, we actually had two oVirt heads running, but we didn't catch it for a 
> while.  Basically, we migrated from a VM running the head (on a separate VM 
> solution) to a hardware solution.

How? Using engine-backup? Some kind of duplication/imaging?

>  Someone ended up turning the VM back on and it started wreaking havoc on our 
> installation.

Ouch.

This is one of my bad dreams re backup/restore/migration.

We try to emphasize in various guides that you must stop and disable
the engine service on the old machine. If you can think of anything
that could have further helped in your own situation/flow/case, do not
hesitate to ping us! Saying "This was obviously our own fault, the
software was just fine" is helpful only to some extent. That said, I
have not heard of exactly the same case so far, although this does not
mean there aren't any.

> This was likely a leftover from that condition.

Can you think how, exactly?

I can't tell exactly from your emails, but it seems you had engine+dwh
on the same machine.
Did you have the DBs on a separate machine (which is not the default)? If
so, it makes sense: the two machines' processes both updated the same DBs.
But if you did use local DBs, and accessed them with host 'localhost'
(which is the default), the above should not have happened - each
machine would then write to its own DBs.
That is still quite bad - you would then have two engines talking
to the same hosts - but in a different way.
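
In case you want to double-check that no duplicates are left, something
like this might help. It is only a rough sketch, not an official
procedure, so please back up the engine DB first and verify which rows
you actually want to keep; the ctid trick below simply keeps one
arbitrary row per var_name:

su - postgres -c 'psql engine -c "select var_name, count(*) from dwh_history_timekeeping group by var_name having count(*) > 1;"'

su - postgres -c 'psql engine -c "delete from dwh_history_timekeeping a using dwh_history_timekeeping b where a.var_name = b.var_name and a.ctid < b.ctid;"'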

Also: if I were you, I'd not keep trusting this system. If it works,
fine. It might break in the future - the above is definitely not part
of the design, not tested, not supported, etc. If at all possible,
perhaps consider reinstalling from scratch (not engine-backup
backup/restore). You can import the existing storage domains, if they
are not damaged as well. Can't even tell you how to test this. If the
individual VMs' disks seem ok, you might backup/restore these.

>  If it happens to return, I’ll be sure to file a bug.

Very well.

>
> One last question: Data for the Storage section of the Global Utilization 
> part of the dashboard is empty.  We are using Ceph via Cinder for our 
> storage.  Is that the issue?

I really have no idea, but it sounds reasonable. If you do not find an
existing open bug/RFE, please open one. Or start a new thread on this
list with a suitable subject header.

>
> Side note: we are now being bitten by this bug - 
> https://bugzilla.redhat.com/show_bug.cgi?id=1465825
>
> Thanks again for the assistance.
>
> Charles
>
>

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.1 testing backup and restore Self-hosted Engine

2017-08-27 Thread Yedidyah Bar David
On Sat, Aug 26, 2017 at 1:56 AM, wodel youchi 
wrote:

> Hi again,
>
> I found this article:
> https://keithtenzer.com/2017/05/02/rhev-4-1-lab-installation-and-configuration-guide/
> I used the last section to delete the old hosted-engine storage, and it
> worked: the minute I deleted the old hosted-storage domain, the system imported
> the new one and then imported the new Manager VM into the Web admin portal.
>

In 4.1, engine-backup has new options during restore:
'--he-remove-storage-vm' and '--he-remove-hosts'. Check '--help'. Sadly, we
do not have enough documentation for this yet. This is being worked on; I
hope to have updates soon.
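
For example, something along these lines - only a sketch, with file names
you would need to adjust, so please verify the exact options against
'engine-backup --help' on your version:

engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log \
    --provision-db --restore-permissions \
    --he-remove-storage-vm --he-remove-hosts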

Best,



>
> Regards.
>
> 2017-08-25 23:15 GMT+01:00 wodel youchi :
>
>> Hi again,
>>
>> I redid the test and re-read the Self-Hosted Engine documentation;
>> there is a link to a Red Hat article,
>> https://access.redhat.com/solutions/1517683, which talks about how to
>> remove the dead hostedEngine VM from the web admin portal.
>>
>> But the article does not talk about how to remove the old hosted engine
>> storage, and this is what causes the problem.
>>
>> This storage is still pointing to the old iscsi disk used by the dead
>> Manager; it is down, but the new manager cannot detach it, saying that
>> the storage domain doesn't exist, which is right, but how do I force the
>> Manager to delete it? I have no idea. I tried to remove it with the REST
>> API, without luck.
>>
>> I tried to import the new hosted storage, but the system said: the
>> storage name is already in use. So I am stuck.
>>
>> any idea? do I have to delete it from the database? if yes how?
>>
>> Regards.
>>
>> 2017-08-25 20:07 GMT+01:00 wodel youchi :
>>
>>> Hi,
>>>
>>> I was able to remove the hostedEngine VM, but I didn't succeed in removing
>>> the old hostedEngine storage domain.
>>> I tried several times to remove it, but I couldn't; the engine VM goes into
>>> pause mode. All I could do was detach the hostedEngine domain from the
>>> datacenter. I then put all the other data domains in maintenance mode, then
>>> I reactivated my master data domain hoping that it would import the new
>>> hostedEngine domain, but without luck.
>>>
>>> It seems like there is something missing in this procedure.
>>>
>>> Regards
>>>
>>> 2017-08-25 9:28 GMT+01:00 Alan Griffiths :
>>>
 As I recall (a few weeks ago now) it was after restore, once the host
 had been registered in the Manager. However, I was testing on 4.0, so maybe
 the behaviour is slightly different in 4.1.

 Can you see anything in the Engine or vdsm logs as to why it won't
 remove the storage? Perhaps try removing the stale HostedEngine VM ?

 On 25 August 2017 at 09:14, wodel youchi 
 wrote:

> Hi and thanks,
>
> But when should I remove the hosted_engine storage? During the restore
> procedure or after? Because afterwards I couldn't do it; the manager refused
> to put that storage in maintenance mode.
>
> Regards
>
> On 25 August 2017 at 08:49, "Alan Griffiths"  wrote:
>
>> As I recall from my testing, if you remove the old hosted_storage
>> domain then the new one should get automatically imported.
>>
>> On 24 August 2017 at 23:03, wodel youchi 
>> wrote:
>>
>>> Hi,
>>>
>>> I am testing the backup and restore procedure of the Self-hosted
>>> Engine, and I have a problem.
>>>
>>> This is how I did the test.
>>>
>>> I have two hosted-engine hypervisors. I used an iSCSI disk for the
>>> engine VM.
>>>
>>> I followed the procedure described in the Self-hosted Engine
>>> document to execute the backup: I put the first host in maintenance mode,
>>> then I created the backup and saved it elsewhere.
>>>
>>> Then I created a new iSCSI disk, reinstalled the first host with
>>> the same IP/hostname, and then followed the restore procedure to get the
>>> Manager up and running again.
>>> - hosted-engine --deploy
>>> - do not execute engine-setup, restore backup first
>>> - execute engine-setup
>>> - remove the host from the manager
>>> - synchronize the restored manager with the host
>>> - finalize deployment.
>>>
>>> All went well up to this point, but I have a problem with the
>>> engine VM: it is shown as down in the admin portal, and ovirt-ha-agent
>>> cannot retrieve the VM config from the shared storage.
>>>
>>> I think the problem is that the hosted-engine storage domain is
>>> still pointing to the old disk of the old manager and not the new one. I
>>> don't know where this information is stored, in the DB or in the
>>> Manager's config files, but when I click on the hosted-engine domain in
>>> the Manager, I can see the old LUN grayed out and the new one (which is
>>> used by the restored Manager) not grayed out.
>>>
>>> How can I fix this?
>>>
>>> Regards.
>>>
>>>
>>> 

Re: [ovirt-users] Want to Contribute

2017-08-27 Thread Yaniv Kaul
On Sat, Aug 26, 2017 at 3:16 PM, Yan Naing Myint 
wrote:

> Hello,
>
> I am currently only Ambassador of Fedora Project in Yangon, Myanmar.
>
> I want to contribute to oVirt by spreading the word about it in my region.
>

Excellent - please do.


> I am also teaching about oVirt here in my region.
> How should I go about becoming officially recognized as something like
> an "oVirt Myanmar Community"?
>

We do not have such official recognition. I believe participating in
meetups and community events is the right approach.
Organizing such events is welcome and is an even better way to become known
for your contribution.

Lastly, posting content on the popular social networks is a great way to
spread the word.

Y.


> Best,
> --
> Yan Naing Myint
> CEO
> Server & Network Engineer
> Cyber Wings Co., Ltd
> http://cyberwings.asia
> 09799950510
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted engine setup with Gluster fail

2017-08-27 Thread Anzar Esmail Sainudeen
Dear Team Ovirt,

 

I am trying to deploy the hosted engine setup with Gluster. The hosted engine
setup failed. The total number of hosts is 3 servers.

 

 

PLAY [gluster_servers]
*

 

TASK [Run a shell script]
**

fatal: [ovirtnode4.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg":
"The conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}

fatal: [ovirtnode3.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg":
"The conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}

fatal: [ovirtnode2.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg":
"The conditional check 'result.rc != 0' failed. The error was: error while
evaluating conditional (result.rc != 0): 'dict object' has no attribute
'rc'"}

to retry, use: --limit @/tmp/tmp59G7Vc/run-script.retry

 

PLAY RECAP
*

ovirtnode2.thumbaytechlabs.int : ok=0    changed=0    unreachable=0    failed=1

ovirtnode3.thumbaytechlabs.int : ok=0    changed=0    unreachable=0    failed=1

ovirtnode4.thumbaytechlabs.int : ok=0    changed=0    unreachable=0    failed=1

 

 

Please note my findings.

 

1. I am still in doubt about the brick setup area, because during oVirt Node
installation the partitions are created automatically and all the space is
mounted. Please find the #fdisk -l output below:

2. 

[root@ovirtnode4 ~]# fdisk -l

 

WARNING: fdisk GPT support is currently new, and therefore in an
experimental phase. Use at your own discretion.

 

Disk /dev/sda: 438.0 GB, 437998583808 bytes, 855465984 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: gpt

 

 

#         Start          End    Size  Type             Name
 1         2048       411647    200M  EFI System       EFI System Partition
 2       411648      2508799      1G  Microsoft basic
 3      2508800    855463935  406.7G  Linux LVM

 

Disk /dev/mapper/onn-swap: 25.4 GB, 25367150592 bytes, 49545216 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

 

 

Disk /dev/mapper/onn-pool00_tmeta: 1073 MB, 1073741824 bytes, 2097152
sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

 

 

Disk /dev/mapper/onn-pool00_tdata: 394.2 GB, 394159718400 bytes, 769843200
sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

 

 

Disk /dev/mapper/onn-pool00-tpool: 394.2 GB, 394159718400 bytes, 769843200
sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1: 378.1 GB,
378053591040 bytes, 738385920 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-pool00: 394.2 GB, 394159718400 bytes, 769843200 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-var: 16.1 GB, 16106127360 bytes, 31457280 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-root: 378.1 GB, 378053591040 bytes, 738385920 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-var--log: 8589 MB, 8589934592 bytes, 16777216 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-home: 1073 MB, 1073741824 bytes, 2097152 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-tmp: 2147 MB, 2147483648 bytes, 4194304 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-var--log--audit: 2147 MB, 2147483648 bytes, 4194304
sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logica

Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-27 Thread Kasturi Narra
Hi,

   If I understand right, the gdeploy script is failing at [1]. There could
be two possible reasons why that would fail.

1) Can you please check that the disks which would be used for brick creation
do not have labels or any partitions on them? (See the example commands below.)

2) Can you please check whether the path [1] exists. If it does not, can you
please change the path of the script in the gdeploy.conf file
to /usr/share/gdeploy/scripts/grafton-sanity-check.sh

[1] /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
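
For the disk check, something like the following might help - a rough
sketch only, where /dev/sdb is just an example device name, so use the
disks you actually intend to use as bricks:

# is the sanity-check script present at either expected path?
ls -l /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh \
      /usr/share/gdeploy/scripts/grafton-sanity-check.sh

# does the intended brick disk still carry partitions or filesystem signatures?
lsblk /dev/sdb
blkid /dev/sdb

# only if the disk is really free to be wiped:
wipefs -a /dev/sdb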

Thanks
kasturi

On Sun, Aug 27, 2017 at 6:52 PM, Anzar Esmail Sainudeen <
an...@it.thumbay.com> wrote:

> Dear Team Ovirt,
>
>
>
> I am trying to deploy the hosted engine setup with Gluster. The hosted
> engine setup failed. The total number of hosts is 3 servers.
>
>
>
>
>
> PLAY [gluster_servers] **
> ***
>
>
>
> TASK [Run a shell script] **
> 
>
> fatal: [ovirtnode4.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
>
> fatal: [ovirtnode3.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
>
> fatal: [ovirtnode2.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
>
> to retry, use: --limit @/tmp/tmp59G7Vc/run-script.retry
>
>
>
> PLAY RECAP 
> *
>
> ovirtnode2.thumbaytechlabs.int : ok=0    changed=0    unreachable=0    failed=1
>
> ovirtnode3.thumbaytechlabs.int : ok=0    changed=0    unreachable=0    failed=1
>
> ovirtnode4.thumbaytechlabs.int : ok=0    changed=0    unreachable=0    failed=1
>
>
>
>
>
> Please note my findings.
>
>
>
> 1. I am still in doubt about the brick setup area, because during oVirt Node
> installation the partitions are created automatically and all the space is
> mounted. Please find the #fdisk -l output below:
>
> 2.
>
> [root@ovirtnode4 ~]# fdisk -l
>
>
>
> WARNING: fdisk GPT support is currently new, and therefore in an
> experimental phase. Use at your own discretion.
>
>
>
> Disk /dev/sda: 438.0 GB, 437998583808 bytes, 855465984 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
> Disk label type: gpt
>
>
>
>
>
> #         Start          End    Size  Type             Name
>  1         2048       411647    200M  EFI System       EFI System Partition
>  2       411648      2508799      1G  Microsoft basic
>  3      2508800    855463935  406.7G  Linux LVM
>
>
>
> Disk /dev/mapper/onn-swap: 25.4 GB, 25367150592 bytes, 49545216 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00_tmeta: 1073 MB, 1073741824 bytes, 2097152
> sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00_tdata: 394.2 GB, 394159718400 bytes, 769843200
> sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00-tpool: 394.2 GB, 394159718400 bytes, 769843200
> sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1: 378.1 GB,
> 378053591040 bytes, 738385920 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00: 394.2 GB, 394159718400 bytes, 769843200
> sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-var: 16.1 GB, 16106127360 bytes, 31457280 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-root: 378.1 GB, 378053591040 bytes, 738385920 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
>