[CentOS] CentOS 7, memory hungry (2.5GB) without users or heavy services running

2016-08-22 Thread correomm
Hello,

Last weekend, the VM with CentOS 7 (kernel-3.10.0-327.28.2.el7.x86_64) was
running with 2.5GB of used memory, but without any users connected or heavy
services running. Three minutes after a reboot, the OS was running with 114MB
used. Why is 62% of the memory used if the OS is not under an intensive load?


top sorted by memory:

top - 12:55:34 up 8 days, 21:41,  1 user,  load average: 0.16, 0.74, 0.49
Tasks:  90 total,   1 running,  89 sleeping,   0 stopped,   0 zombie
%Cpu0  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  3882816 total,   310948 free,  2639892 used,   931976 buff/cache
KiB Swap:  7815164 total,  7815164 free,        0 used.   825004 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
  579 root      20   0  323592  23168   6232 S  0.0  0.6   0:30.07 firewalld
 1110 root      20   0  553052  18264   5624 S  0.0  0.5   3:29.65 tuned
  720 polkitd   20   0  530364  10900   4736 S  0.0  0.3   0:12.84 polkitd
  587 root      20   0  250252   8060   4800 S  0.7  0.2  20:48.88 vmtoolsd
  692 root      20   0  436892   7988   6156 S  0.0  0.2   0:12.36 NetworkManager
    1 root      20   0  273832   7128   3820 S  0.0  0.2   2:20.92 systemd
  577 root      20   0  210456   5488   3708 S  0.0  0.1   0:03.85 abrtd
26833 root      20   0  143008   5436   4148 S  0.0  0.1   0:13.47 sshd
  487 root      20   0   46424   5112   2652 S  0.0  0.1   0:02.26 systemd-udevd
  592 root      20   0  207976   4480   3124 S  0.0  0.1   0:02.31 abrt-watch-log
 1806 postfix   20   0   93512   4088   3028 S  0.0  0.1   0:02.80 qmgr
  470 root      20   0  129132   3956   2456 S  0.0  0.1   0:00.02 lvmetad
  580 root      20   0  445712   3888   3156 S  0.0  0.1   1:00.74 rsyslogd
26950 postfix   20   0   93328   3848   2840 S  0.0  0.1   0:00.11 pickup
 1112 root      20   0   82560   3544   2688 S  0.0  0.1   1:02.99 sshd
26837 root      20   0  116160   2976   1732 S  0.0  0.1   0:01.94 bash
  719 root      20   0   53064   2644   2052 S  0.0  0.1   0:00.25 wpa_supplicant
26783 root      20   0  178108   2448   1520 S  0.0  0.1   0:00.07 crond
  578 root      20   0  130060   2184   1672 S  0.0  0.1   0:00.66 smartd
 1801 root      20   0   93224   2168   1116 S  0.0  0.1   0:11.25 master
27041 root      20   0  157688   2128   1448 R  1.4  0.1   0:00.51 top
  581 dbus      20   0   37112   2116   1392 S  0.0  0.1   0:28.27 dbus-daemon
26968 root      20   0   36820   2076   1812 S  0.0  0.1   0:00.69 systemd-journal
  608 chrony    20   0  115844   1860   1448 S  0.0  0.0   0:14.59 chronyd
  586 root      20   0   26448   1700   1348 S  0.0  0.0   0:46.24 systemd-logind
  615 root      20   0  126336   1680   1028 S  0.0  0.0   0:21.39 crond
  555 root      16  -4  116724   1604   1208 S  0.0  0.0   0:33.97 auditd
26787 root      20   0  113120   1404   1220 S  0.0  0.0   0:00.05 freshclam-sleep
  620 root      20   0   25964    952    752 S  0.0  0.0   0:00.62 atd
18160 root      20   0  110036    816    692 S  0.0  0.0   0:00.18 agetty
  601 libstor+  20   0    8532    780    632 S  0.0  0.0   0:10.61 lsmd
26792 root      20   0  107896    620    528 S  0.0  0.0   0:00.00 sleep
    2 root      20   0       0      0      0 S  0.0  0.0   0:00.24 kthreadd
    3 root      20   0       0      0      0 S  0.0  0.0   0:06.40 ksoftirqd/0
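
The per-process RSS values above only add up to something on the order of
150MB, so most of the 2.5GB of "used" memory has to be page cache or kernel
memory (slab) that top does not attribute to any process. A few quick checks
that narrow it down, assuming a stock CentOS 7 layout (the grep patterns below
are just a suggestion):

# total resident memory of all processes, in kB (approximate; shared pages are counted per process)
ps --no-headers -eo rss | awk '{sum += $1} END {print sum " kB"}'
# MemAvailable already discounts reclaimable cache and slab, so it is a better gauge than "used"
grep -E 'MemAvailable|^Slab|SReclaimable|SUnreclaim' /proc/meminfo
# largest slab caches; runaway dentry cache is a common cause of this pattern on CentOS 7
slabtop -o | head -n 20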


# netstat -nao
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       Timer
tcp        0      0 127.0.0.1:10025         0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      off (0.00/0/0)
tcp        0     64 ip.ad.dr.ess:22         my.ip.by.myisp:50470    ESTABLISHED on (0.26/0/0)
tcp6       0      0 :::22                   :::*                    LISTEN      off (0.00/0/0)
tcp6       0      0 ::1:25                  :::*                    LISTEN      off (0.00/0/0)
udp        0      0 127.0.0.1:323           0.0.0.0:*                           off (0.00/0/0)
udp6       0      0 ::1:323                 :::*                                off (0.00/0/0)
raw6       0      0 :::58                   :::*                    7           off (0.00/0/0)
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path
unix  2      [ ACC ]     STREAM     LISTENING     9979     /run/lvm/lvmpolld.socket
unix  2      [ ACC ]     STREAM     LISTENING     9988     /run/lvm/lvmetad.socket
unix  2      [ ]         DGRAM                    6664     /run/systemd/notify
unix  2      [ ACC ]     STREAM     LISTENING     17787    private/tlsmgr
unix  2      [ ACC ]     STREAM     LISTENING     17790    private/rewrite
unix  2      [ ACC ]     STREAM     LISTENING     17793    private/bounce
unix  2      [ ACC ]     STREAM     LISTENING     6676     /run/systemd/journal/stdout
unix  2      [ ACC ]     STREAM     LISTENING     17796    private/defer
unix  2      [ ACC ]     STREAM

[CentOS] dm-0: WRITE SAME failed. Manually zeroing

2016-08-22 Thread correomm
Hello

In a VM with CentOS 7 (kernel 3.10.0-327.28.3.el7.x86_64), the console shows
the following message at root login:
"[72.776182] dm-0: WRITE SAME failed. Manually zeroing".

What is wrong?
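
That message usually just means the virtual disk rejected the SCSI WRITE SAME
command and the kernel fell back to zeroing the blocks itself, so it is noisy
rather than fatal. If it is only noise, a common workaround on VMware guests is
to disable WRITE SAME on the underlying device (a sketch; sda is an assumption,
adjust to your disk, and a udev rule or rc.local entry would be needed to make
it persistent):

# 0 means WRITE SAME is already disabled for the device
cat /sys/block/sda/device/scsi_disk/*/max_write_same_blocks
# turn it off until the next reboot
for f in /sys/block/sda/device/scsi_disk/*/max_write_same_blocks; do
    echo 0 > "$f"
done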

Regards,
MM


Re: [CentOS] BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]

2016-08-18 Thread correomm
No, I don't use snapshots.

It is a Dell 2 TB Enterprise 3.5" SATA Hard Drive.

The disk activity of the host is normal to low. It hosts only a few VMs.

On Thu, Aug 18, 2016 at 2:32 PM, JJB <j...@internetguy.net> wrote:

>
>> 2016-08-18 12:39 GMT-04:00 correomm <corre...@gmail.com>:
>>
>>> This bug is reported only on the VMs with CentOS 7 running on VMware
>>> ESXi 5.1.
>>> The vSphere performance graph shows high CPU consumption and disk activity
>>> only on VMs with CentOS 7. Sometimes I cannot connect remotely with ssh
>>> (timeout error).
>>>
>> I'm also seeing those errors in several servers, running under 5.5.
>> Currently investigating if this
>> <https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009996>
>> has anything to do with it (the resource overcommit bit).
>>
>
> Does this happen (only) while taking or consolidating snapshots? The VM is
> suspended during these operations and the OS isn't too crazy about it,
> especially if you have slow storage.
>
> Jack


Re: [CentOS] BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]

2016-08-18 Thread correomm
Yes, I tried it, but it does not exist:

vmguest # cat /proc/sys/kernel/softlockup_thresh
cat: /proc/sys/kernel/softlockup_thresh: No such file or directory
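
That tunable was removed in newer kernels; on 3.10 the soft lockup detector is
configured through the watchdog sysctls instead, and the warning fires at
roughly twice kernel.watchdog_thresh. A sketch of checking and raising it:

# current settings (watchdog_thresh is in seconds; soft lockups trigger at about 2x this value)
sysctl kernel.watchdog kernel.watchdog_thresh
# raise the threshold temporarily, e.g. while the VMware stun/overcommit angle is investigated
sysctl -w kernel.watchdog_thresh=30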

On Thu, Aug 18, 2016 at 2:06 PM, Carlos A. Carnero Delgado <
carloscarn...@gmail.com> wrote:

> 2016-08-18 12:39 GMT-04:00 correomm <corre...@gmail.com>:
>
> > This bug is reported only on the VMs with CentOS 7 running on VMware
> > ESXi 5.1.
> > The vSphere performance graph shows high CPU consumption and disk activity
> > only on VMs with CentOS 7. Sometimes I cannot connect remotely with ssh
> > (timeout error).
> >
>
> I'm also seeing those errors in several servers, running under 5.5.
> Currently investigating if this
> <https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009996>
> has anything to do with it (the resource overcommit bit).
>
> HTH,
> Carlos.


[CentOS] BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]

2016-08-18 Thread correomm
This bug is reported only on the VMs with CentOS 7 running on VMware
ESXi 5.1.
The vSphere performance graph shows high CPU consumption and disk activity
only on VMs with CentOS 7. Sometimes I cannot connect remotely with ssh
(timeout error).


The details of the last issues were reported to retrace.fedoraproject.org.

Do you have a hint?
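
In case it helps with further digging, the full stored oops text for any entry
below should be retrievable with abrt-cli itself (a sketch; the directory used
here is the one from the second entry in the listing):

# print the detailed report for one problem, including the stored backtrace/oops text
abrt-cli info -d /var/spool/abrt/oops-2016-08-09-16:46:33-3165-0
# the raw captured files live in the same problem directory
ls /var/spool/abrt/oops-2016-08-09-16:46:33-3165-0/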


[root@vmguest ~]# abrt-cli list
id c52b463b15cfa94af7a96f237e5f525332750dd3
reason: systemd-journald killed by SIGABRT
time:   Tue 16 Aug 2016 03:10:52 PM CLST
cmdline:/usr/lib/systemd/systemd-journald
package:systemd-219-19.el7_2.12
uid:0 (root)
count:  1
Directory:  /var/spool/abrt/ccpp-2016-08-16-15:10:52-458
Reported:
https://retrace.fedoraproject.org/faf/reports/bthash/d5f5d4f75b200eeab2f83c8340a37c5d507e29a3

id e6955aa621a0d296d4cdd05421523885e85a179b
reason: BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]
time:   Tue 09 Aug 2016 04:46:33 PM CLT
cmdline:BOOT_IMAGE=/vmlinuz-3.10.0-327.28.2.el7.x86_64
root=/dev/mapper/centos_vmguest-root ro crashkernel=auto
rd.lvm.lv=centos_vmguest/root
rd.lvm.lv=centos_vmguest/swap rhgb quiet LANG=en_US.UTF-8
package:kernel
uid:0 (root)
count:  1
Directory:  /var/spool/abrt/oops-2016-08-09-16:46:33-3165-0
Reported:
https://retrace.fedoraproject.org/faf/reports/bthash/4e231b49f72864c3487d8985337f9d41ce43da28

id 402f22e7214ea6bddcb0db9a9315527be245f943
reason: systemd-logind killed by SIGABRT
time:   Wed 20 Jul 2016 06:10:55 AM CLT
cmdline:/usr/lib/systemd/systemd-logind
package:systemd-219-19.el7_2.9
uid:0 (root)
count:  3
Directory:  /var/spool/abrt/ccpp-2016-07-20-06:10:55-32283
Reported:
https://retrace.fedoraproject.org/faf/reports/bthash/307b26a77cc6d5005ce2fdf18ff010fe3dc94401

id 58a46f4a45699384ad74850f53e749c702ee7b0b
reason: systemd-journald killed by SIGABRT
time:   Tue 02 Aug 2016 05:44:50 PM CLT
cmdline:/usr/lib/systemd/systemd-journald
package:systemd-219-19.el7_2.11
uid:0 (root)
count:  1
Directory:  /var/spool/abrt/ccpp-2016-08-02-17:44:50-454
Reported:
https://retrace.fedoraproject.org/faf/reports/bthash/1af5058d2b9650a1d91c676fe15feb5612afccb9

id f4a35ca85a046e74bf4a4382c9f9a5c8dd8be149
reason: BUG: soft lockup - CPU#0 stuck for 24s! [vmtoolsd:579]
time:   Tue 02 Aug 2016 06:45:29 AM CLT
cmdline:BOOT_IMAGE=/vmlinuz-3.10.0-327.22.2.el7.x86_64
root=/dev/mapper/centos_vmguest-root ro crashkernel=auto
rd.lvm.lv=centos_vmguest/root
rd.lvm.lv=centos_vmguest/swap rhgb quiet LANG=en_US.UTF-8
package:kernel
uid:0 (root)
count:  1
Directory:  /var/spool/abrt/oops-2016-08-02-06:45:29-11859-0
Reported:
https://retrace.fedoraproject.org/faf/reports/bthash/87298dcaf6b7dea7a92b136e3332209f3fd7c4d2

id edaec629ccce62943e9bdb514fe6e319ab320669
reason: BUG: soft lockup - CPU#0 stuck for 27s! [khugepaged:51]
time:   Tue 26 Jul 2016 06:00:13 PM CLT
cmdline:BOOT_IMAGE=/vmlinuz-3.10.0-327.18.2.el7.x86_64
root=/dev/mapper/centos_vmguest-root ro crashkernel=auto
rd.lvm.lv=centos_vmguest/root
rd.lvm.lv=centos_vmguest/swap rhgb quiet LANG=en_US.UTF-8
package:kernel
uid:0 (root)
count:  1
Directory:  /var/spool/abrt/oops-2016-07-26-18:00:08-641-4
Reported:   cannot be reported

id b707fd06199e2e1edcb105878a46e238a50746f3
reason: BUG: soft lockup - CPU#3 stuck for 23s!
[systemd-journal:422]
time:   Tue 26 Jul 2016 06:00:10 PM CLT
cmdline:BOOT_IMAGE=/vmlinuz-3.10.0-327.18.2.el7.x86_64
root=/dev/mapper/centos_vmguest-root ro crashkernel=auto
rd.lvm.lv=centos_vmguest/root
rd.lvm.lv=centos_vmguest/swap rhgb quiet LANG=en_US.UTF-8
package:kernel
uid:0 (root)
count:  1
Directory:  /var/spool/abrt/oops-2016-07-26-18:00:08-641-1
Reported:   cannot be reported

id a39eead9c9f75c2dc94df0852cd24260f414b80b
reason: BUG: soft lockup - CPU#2 stuck for 22s! [swapper/2:0]
time:   Tue 26 Jul 2016 06:00:08 PM CLT
cmdline:BOOT_IMAGE=/vmlinuz-3.10.0-327.18.2.el7.x86_64
root=/dev/mapper/centos_vmguest-root ro crashkernel=auto
rd.lvm.lv=centos_vmguest/root
rd.lvm.lv=centos_vmguest/swap rhgb quiet LANG=en_US.UTF-8
package:kernel
uid:0 (root)
count:  1
Directory:  /var/spool/abrt/oops-2016-07-26-18:00:08-641-0
Reported:   cannot be reported

id fe7e2542e93848e41d9d702af5c4d12b1d833b72
reason: systemd-logind killed by SIGABRT
time:   Wed 20 Jul 2016 08:20:29 AM CLT
cmdline:/usr/lib/systemd/systemd-logind
package:systemd-219-19.el7_2.9
uid:0 (root)
count:  1
Directory:  /var/spool/abrt/ccpp-2016-07-20-08:20:29-32660

id e58538a3aa1f01f384fe9bdd40a693f9d2f32889
reason: BUG: soft lockup - CPU#2 stuck for 24s! [kworker/2:0:31607]
time:   Mon