[ovirt-users] Re: oVirt 4.5.2 /var growing rapidly due to ovirt_engine_history db

2022-10-05 Thread sohail_akhter3
Hi Aviv

We are still observing this issue. The DWH database is growing very rapidly, and so far we have been unable to find what is causing it. These are the top tables consuming disk space. I added an entry to the root crontab to vacuum the db, but it did not help.
 
 public.host_interface_samples_history     | 56 GB
 public.host_interface_hourly_history      | 11 GB
 public.vm_disks_usage_samples_history     | 8819 MB
 public.vm_interface_samples_history       | 3396 MB
 public.vm_samples_history                 | 2689 MB
 public.host_interface_configuration       | 2310 MB
 public.vm_disk_samples_history            | 1839 MB
 public.vm_disks_usage_hourly_history      | 1210 MB
 public.vm_device_history                  | 655 MB
 public.vm_interface_hourly_history        | 536 MB
 public.vm_hourly_history                  | 428 MB
 public.statistics_vms_users_usage_hourly  | 366 MB
 public.vm_disk_hourly_history             | 330 MB
 public.host_samples_history               | 140 MB
 public.host_interface_daily_history       | 77 MB
 public.calendar                           | 40 MB
 public.host_hourly_history                | 24 MB
 public.vm_disk_configuration              | 19 MB
 public.vm_interface_configuration         | 16 MB
 public.vm_configuration                   | 14 MB
 public.vm_disks_usage_daily_history       | 11 MB
 public.vm_interface_daily_history         | 4992 kB
 public.storage_domain_samples_history     | 4312 kB
 public.vm_daily_history                   | 4088 kB
 public.vm_disk_daily_history              | 3048 kB
 public.statistics_vms_users_usage_daily   | 2816 kB
 public.cluster_configuration              | 1248 kB
 public.host_configuration                 | 1136 kB
 public.storage_domain_hourly_history      | 744 kB
 public.tag_relations_history              | 352 kB
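For reference, a listing like the one above can be produced with a query along these lines (a sketch using the standard PostgreSQL statistics views; the exact query we used may differ):

-- total size (table + indexes + TOAST) per table, largest first
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;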

This is the row count in the host_interface_samples_history table:
ovirt_engine_history=# select count(*) from host_interface_samples_history;
   count   
-----------
 316633499
(1 row)
I had no choice but to truncate the table, yet within the next 3-4 hours the count had already grown substantially again:
ovirt_engine_history=# select count(*) from host_interface_samples_history;
  count  
---------
 8743168
(1 row)
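To see how fast rows are actually arriving, a query along these lines can help (a sketch; it assumes the table has a history_datetime column, as the oVirt DWH sample tables normally do):

-- rows inserted per hour, most recent 24 hours first
SELECT date_trunc('hour', history_datetime) AS hour,
       count(*) AS rows_inserted
FROM host_interface_samples_history
GROUP BY 1
ORDER BY 1 DESC
LIMIT 24;

If the per-hour rate is roughly constant, the growth is simply hosts x interfaces x sampling frequency rather than a leak, and a host with many VLAN or bridge interfaces multiplies it quickly.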
So even if we move the DWH to a separate VM, it will fill the disk within a few days. I also applied the recommendations below, but they did not make any difference.

# cat ovirt-engine-dwhd.conf
#
# These variables control the amount of memory used by the java
# virtual machine where the daemon runs:
#
DWH_HEAP_MIN=1g
DWH_HEAP_MAX=3g

# Recommendation as per oVirt Guide in case dwh and engine are on same machine
# https://www.ovirt.org/documentation/data_warehouse_guide/#Installing_and_Configuring_Data_Warehouse_on_a_Separate_Machine_DWH_admin
DWH_TABLES_KEEP_HOURLY=780
DWH_TABLES_KEEP_DAILY=0
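Note that the DWH daemon has to be restarted for changed retention settings to take effect; a minimal sketch:

# pick up the new DWH_TABLES_KEEP_* values
systemctl restart ovirt-engine-dwhd.service

Also, shrinking retention only deletes old rows; the freed pages are returned to the filesystem only by a VACUUM FULL, which is what dwh-vacuum -f runs.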

Is there anything else we can check to find what is causing such a rapid increase?
Please let me know if you need any further information.

Regards
Sohail






[ovirt-users] Re: oVirt 4.5.2 /var growing rapidly due to ovirt_engine_history db

2022-09-08 Thread sohail_akhter3
Hi Aviv,

Thanks for your reply. We upgraded from 4.4.7 to 4.4.10 and then to 4.5.2, and we observed that the growth started after the upgrade to 4.5.2. I have to vacuum the DWH every 2-3 days. Here is an example (some output omitted):
--
[root@manager ~]# dwh-vacuum -f -v
SELECT pg_catalog.set_config('search_path', '', false);
vacuumdb: vacuuming database "ovirt_engine_history"
RESET search_path;
SELECT c.relname, ns.nspname FROM pg_catalog.pg_class c
 JOIN pg_catalog.pg_namespace ns ON c.relnamespace OPERATOR(pg_catalog.=) ns.oid
 LEFT JOIN pg_catalog.pg_class t ON c.reltoastrelid OPERATOR(pg_catalog.=) t.oid
 WHERE c.relkind OPERATOR(pg_catalog.=) ANY (array['r', 'm'])
 ORDER BY c.relpages DESC;
SELECT pg_catalog.set_config('search_path', '', false);
VACUUM (FULL, VERBOSE) public.host_interface_samples_history;
INFO:  vacuuming "public.host_interface_samples_history"
INFO:  "host_interface_samples_history": found 94115 removable, 70244664 
nonremovable row versions in 1903718 pages
DETAIL:  0 dead row versions cannot be removed yet.
CPU: user: 36.72 s, system: 12.91 s, elapsed: 195.78 s.
VACUUM (FULL, VERBOSE) public.host_interface_hourly_history;
INFO:  vacuuming "public.host_interface_hourly_history"
INFO:  "host_interface_hourly_history": found 126645 removable, 40469226 
nonremovable row versions in 482262 pages
DETAIL:  0 dead row versions cannot be removed yet.
CPU: user: 20.71 s, system: 5.58 s, elapsed: 115.83 s.
VACUUM (FULL, VERBOSE) public.vm_disks_usage_samples_history;
INFO:  vacuuming "public.vm_disks_usage_samples_history"
INFO:  "vm_disks_usage_samples_history": found 2028 removable, 1672491 
nonremovable row versions in 307111 pages
DETAIL:  0 dead row versions cannot be removed yet.
CPU: user: 4.35 s, system: 3.77 s, elapsed: 51.81 s.
-
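Notably, the FULL vacuum finds almost nothing to remove (e.g. 94115 removable vs. 70244664 nonremovable row versions), which suggests the space is live data rather than bloat. A quick check of the dead-tuple counts (a sketch against the standard PostgreSQL statistics view) can confirm this:

-- high n_dead_tup would indicate bloat; low n_dead_tup means genuine new data
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;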
We plan to move the DWH and Grafana to a separate VM. Meanwhile, we are curious to know the reason for this rapid growth.


[ovirt-users] oVirt 4.5.2 /var growing rapidly due to ovirt_engine_history db

2022-08-29 Thread sohail_akhter3
Hi,

We have recently upgraded our oVirt environment to version 4.5.2. The environment is based on a hosted engine. Since the upgrade we have noticed that the /var partition on the engine VM is growing very rapidly. If we vacuum the ovirt_engine_history db, the /var usage shrinks, but by the next day it has grown again by 5-10%. We have vacuumed the db a couple of times but are not sure why it is growing so rapidly.
Here is partial output from the vacuuming done on 26-08-22. The "host_interface_hourly_history" table had the most entries to remove; the remaining tables had few. Previously, the "host_interface_samples_history" table had entries to be removed.
Any idea what could be the reason for this?

# dwh-vacuum -f -v
SELECT pg_catalog.set_config('search_path', '', false);
vacuumdb: vacuuming database "ovirt_engine_history"
RESET search_path;
SELECT c.relname, ns.nspname FROM pg_catalog.pg_class c
 JOIN pg_catalog.pg_namespace ns ON c.relnamespace OPERATOR(pg_catalog.=) ns.oid
 LEFT JOIN pg_catalog.pg_class t ON c.reltoastrelid OPERATOR(pg_catalog.=) t.oid
 WHERE c.relkind OPERATOR(pg_catalog.=) ANY (array['r', 'm'])
 ORDER BY c.relpages DESC;
SELECT pg_catalog.set_config('search_path', '', false);
VACUUM (FULL, VERBOSE) public.host_interface_samples_history;
INFO:  vacuuming "public.host_interface_samples_history"
INFO:  "host_interface_samples_history": found 3135 removable, 84609901 
nonremovable row versions in 1564960 pages
DETAIL:  0 dead row versions cannot be removed yet.
CPU: user: 41.88 s, system: 14.93 s, elapsed: 422.83 s.
VACUUM (FULL, VERBOSE) public.host_interface_hourly_history;
INFO:  vacuuming "public.host_interface_hourly_history"
INFO:  "host_interface_hourly_history": found 252422 removable, 39904650 
nonremovable row versions in 473269 pages
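To quantify the growth itself, the on-disk size of the history database can be sampled over time, for example daily from cron (a sketch using a standard PostgreSQL function):

-- current on-disk size of the history database
SELECT pg_size_pretty(pg_database_size('ovirt_engine_history'));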

Please let me know if any further information is required.

Regards
Sohail


[ovirt-users] Does memory ballooning work if memory overcommit is disabled in the cluster

2022-03-03 Thread sohail_akhter3
Hi All,

We have an oVirt 4.4 environment running. In the Cluster Optimization settings we have selected the "None - Disable memory overcommit" option under Memory Optimization, but the Memory Balloon checkbox is still enabled. My understanding is that ballooning only works when memory overcommit is enabled. If that is true, the checkbox should be disabled when we are not overcommitting memory; or does memory ballooning still work even if memory overcommit is disabled? According to the link below, ballooning works when memory overcommit is enabled.

https://lists.ovirt.org/pipermail/users/2017-October/084675.html

Please let me know if any further information is required.

Many thanks. 

Regards
Sohail


[ovirt-users] Re: did 4.3.9 reset bug https://bugzilla.redhat.com/show_bug.cgi?id=1590266

2022-01-06 Thread sohail_akhter3
Hi Didi

Downgrading qemu-kvm fixed the issue. What is the reason it does not work with version 6.1.0? This is the version currently installed on my host:

# yum info qemu-kvm
Last metadata expiration check: 2:03:58 ago on Thu 06 Jan 2022 03:18:40 PM UTC.
Installed Packages
Name : qemu-kvm
Epoch: 15
Version  : 6.0.0
Release  : 33.el8s
Architecture : x86_64
Size : 0.0  
Source   : qemu-kvm-6.0.0-33.el8s.src.rpm
Repository   : @System
From repo: ovirt-4.4-centos-stream-advanced-virtualization
Summary  : QEMU is a machine emulator and virtualizer
URL  : http://www.qemu.org/
License  : GPLv2 and GPLv2+ and CC-BY
Description  : qemu-kvm is an open source virtualizer that provides hardware
             : emulation for the KVM hypervisor. qemu-kvm acts as a virtual
             : machine monitor together with the KVM kernel modules, and emulates the
             : hardware for a full system such as a PC and its associated peripherals.

Available Packages
Name : qemu-kvm
Epoch: 15
Version  : 6.1.0
Release  : 5.module_el8.6.0+1040+0ae94936
Architecture : x86_64
Size : 156 k
Source   : qemu-kvm-6.1.0-5.module_el8.6.0+1040+0ae94936.src.rpm
Repository   : appstream
Summary  : QEMU is a machine emulator and virtualizer
URL  : http://www.qemu.org/
License  : GPLv2 and GPLv2+ and CC-BY
Description  : qemu-kvm is an open source virtualizer that provides hardware
             : emulation for the KVM hypervisor. qemu-kvm acts as a virtual
             : machine monitor together with the KVM kernel modules, and emulates the
             : hardware for a full system such as a PC and its associated peripherals.
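For anyone hitting the same problem, the downgrade and an optional pin can be done roughly like this (a sketch; the versionlock plugin is an assumption on my part, not something from the output above):

# downgrade to the known-good build from the advanced-virtualization repo
dnf downgrade qemu-kvm-6.0.0-33.el8s
# optionally pin it so a later update does not pull 6.1.0 back in
dnf install python3-dnf-plugin-versionlock
dnf versionlock add qemu-kvm-6.0.0-33.el8s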

Many thanks for your help

Regards
Sohail


[ovirt-users] Re: did 4.3.9 reset bug https://bugzilla.redhat.com/show_bug.cgi?id=1590266

2022-01-06 Thread sohail_akhter3
Hi Didi,

Apologies, this is my first post. I am referring to the issue described in the Red Hat solution mentioned in this thread:
https://access.redhat.com/solutions/4462431
I am trying to deploy the hosted engine VM. I tried via the cockpit GUI and through the CLI; in both cases the deployment fails with the error message below, from which I can see the VM is in a powering-down state and its health status is bad.

[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check engine VM health]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 180, "changed": true, 
"cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.162941", 
"end": "2022-01-06 00:43:07.060659", "rc": 0, "start": "2022-01-06 
00:43:06.897718", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": 
{\"host-id\": 1, \"host-ts\": 117459, \"score\": 3400, \"engine-status\": 
{\"vm\": \"up\", \"health\": \"bad\", \"detail\": \"Powering down\", 
\"reason\": \"failed liveliness check\"}, \"hostname\": 
\"seliics00123.ovirt4.fl.dselab.seli.gic.ericsson.se\", \"maintenance\": false, 
\"stopped\": false, \"crc32\": \"d889fd9b\", \"conf_on_shared_storage\": true, 
\"local_conf_timestamp\": 117459, \"extra\": 
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=117459 (Thu 
Jan  6 00:43:02 2022)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=117459 
(Thu Jan  6 00:43:02 
2022)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStop\\nstopped=False\\ntimeout=Fri
 Jan  2 08:38:08 1970
 \\n\", \"live-data\": true}, \"global_maintenance\": false}", "stdout_lines": 
["{\"1\": {\"host-id\": 1, \"host-ts\": 117459, \"score\": 3400, 
\"engine-status\": {\"vm\": \"up\", \"health\": \"bad\", \"detail\": \"Powering 
down\", \"reason\": \"failed liveliness check\"}, \"hostname\": 
\"seliics00123.ovirt4.fl.dselab.seli.gic.ericsson.se\", \"maintenance\": false, 
\"stopped\": false, \"crc32\": \"d889fd9b\", \"conf_on_shared_storage\": true, 
\"local_conf_timestamp\": 117459, \"extra\": 
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=117459 (Thu 
Jan  6 00:43:02 2022)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=117459 
(Thu Jan  6 00:43:02 
2022)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStop\\nstopped=False\\ntimeout=Fri
 Jan  2 08:38:08 1970\\n\", \"live-data\": true}, \"global_maintenance\": 
false}"]}
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Check VM status at virt level]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if engine VM is not 
running]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get target engine VM IP 
address]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get VDSM's target engine VM 
stats]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Convert stats to JSON format]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Get target engine VM IP 
address from VDSM stats]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if Engine IP is 
different from engine's he_fqdn resolved IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM 
IP address is  while the engine's he_fqdn 
manager-ovirt4.fl.dselab.seli.gic.ericsson.se resolves to 10.228.170.36. If you 
are using DHCP, check your DHCP reservation configuration"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing 
ansible-playbook

The VM is running:
[root@host]# virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf list
 Id   Name           State
--------------------------
 37   HostedEngine   running
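For completeness, the same engine-status JSON that the playbook polls can be inspected directly (a sketch; jq is an assumption and may need to be installed):

# extract just the engine health fields from the HA agent status
hosted-engine --vm-status --json | jq '."1"."engine-status"'

In the failed run above this reports "health": "bad" with "detail": "Powering down" and "reason": "failed liveliness check".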


Please let me know if you need any further output or log files.

Regards
Sohail


[ovirt-users] Re: did 4.3.9 reset bug https://bugzilla.redhat.com/show_bug.cgi?id=1590266

2022-01-05 Thread sohail_akhter3
Hi Guys

I am facing the same issue in my recent deployment. There is nothing in the logs that points to the cause. I am deploying the VM on a host running CentOS Stream. Has anybody faced this issue recently? Please let me know if you need any further information.

Many Thanks.

Regards
Sohail