Yes, I'm sure every mount point is duplicated.
If I look at /proc/1009/mounts,
I see all entries of /etc/fstab,
and after that the same entries again.

In /etc/fstab and /proc/mounts the entries are not duplicated.

umount /var/www/clients/client449/web1440/log
Then the entry is gone:
cat /proc/1009/mounts | grep web1440

And if I do:
mount /var/www/clients/client449/web1440/log
the lines are back:
cat /proc/1009/mounts | grep web1440
/dev/ploop21699p1 /var/www/clients/client449/web1440/log ext4 rw,relatime,data=ordered,balloon_ino=12,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/ploop21699p1 /var/www/clients/client449/web1440/log ext4 rw,relatime,data=ordered,balloon_ino=12,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0

So it looks like the mount command runs twice... but how?
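
A quick way to double-check (assuming PID 1009 is still the same process) is
to count the entries that really occur more than once:

sort /proc/1009/mounts | uniq -d | wc -l

If that equals the number of fstab entries, every mount really is listed twice.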


If I look on the node:
cat /proc/self/mountinfo  | grep ploop
178 51 182:518529 / /vz/root/1 rw,relatime shared:154 - ext4 /dev/ploop32408p1 rw,data=ordered,balloon_ino=12
55 51 182:347185 / /vz/root/2 rw,relatime shared:34 - ext4 /dev/ploop21699p1 rw,data=ordered,balloon_ino=12,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group


And on the VPS: 578 lines, all unique.

Very, very strange.

-----Original Message-----
From: users-boun...@openvz.org <users-boun...@openvz.org> On Behalf Of Konstantin
Khorenko
Sent: Wednesday, June 10, 2020 16:22
To: OpenVZ users <users@openvz.org>
Subject: Re: [Users] openvz 7> centos 8 container

Hi Steffan,

On 06/10/2020 03:51 PM, mailingl...@tikklik.nl wrote:
> Just noticed your script is not completely correct: "Total number of
> mounts: 28169" is not counting as it should.

Yes, I've already written it in my previous mail; there is a mistake in the
script.
=================================================
 > On 06/09/2020 01:32 PM, Konstantin Khorenko wrote:
 > > Total number of mounts: 28169
 >
 > He-he, there is a mistake in the script - "Total number of mounts"
 > prints the sum of PIDs.
 > But ok, the real sum of mounts is 10000+.
=================================================
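
For reference, a corrected sketch that sums mount counts instead of PIDs
(untested):

# total=0; for i in `lsns | grep mnt | awk '{print $4;}'`; do \
      n=`cat /proc/$i/mounts | wc -l`; \
      echo -e "PID: $i,\t# of mounts: $n,\tcmdline: `cat /proc/$i/cmdline`"; \
      total=$((total + n)); \
  done; echo "Total number of mounts: $total"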

> But when looking at the mounts there is something strange:
>
> Every PID has every mount point twice.
>
> cat /proc/1009/mounts | grep web1440
> /dev/ploop21699p1 /var/www/clients/client449/web1440/log ext4 rw,relatime,data=ordered,balloon_ino=12,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
> /dev/ploop21699p1 /var/www/clients/client449/web1440/log ext4 rw,relatime,data=ordered,balloon_ino=12,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0

Are you sure really *every* mount entry is duplicated?
If really *all* mounts are duplicated, then it's strange.
But I guess not all of them are duplicated.

 > /dev/ploop21699p1 /var/www/clients/client449/web1440/log ext4 ...
This is a bind mount, so I think someone just did two bind mounts to the same
place; that's it.

Maybe you've started the software twice, and the first time it was not
gracefully shut down and did not unmount the "old" mounts - honestly, I've
no idea.

Or maybe someone did something like "mount -o bind / /" on CT start, and
after that every mount in the CT will be shown twice due to propagation.

Example:
[root@localhost ~]# mount -o bind / /

[root@localhost ~]# cat /proc/self/mountinfo  | grep ploop
153 62 182:503793 / / rw,relatime shared:41 master:38 - ext4 /dev/ploop31487p1 rw,data=ordered,balloon_ino=12
264 153 182:503793 / / rw,relatime shared:41 master:38 - ext4 /dev/ploop31487p1 rw,data=ordered,balloon_ino=12

[root@localhost ~]# mount -o bind /var/log /mnt

[root@localhost ~]# cat /proc/self/mountinfo  | grep ploop
153 62 182:503793 / / rw,relatime shared:41 master:38 - ext4 /dev/ploop31487p1 rw,data=ordered,balloon_ino=12
264 153 182:503793 / / rw,relatime shared:41 master:38 - ext4 /dev/ploop31487p1 rw,data=ordered,balloon_ino=12
269 153 182:503793 /var/log /mnt rw,relatime shared:41 master:38 - ext4 /dev/ploop31487p1 rw,data=ordered,balloon_ino=12
270 264 182:503793 /var/log /mnt rw,relatime shared:41 master:38 - ext4 /dev/ploop31487p1 rw,data=ordered,balloon_ino=12

Currently I do not see any sign that the issue is virtualization-related;
most probably, if you create the same setup on a hardware node, you'll get
the same result.

(If you think it is virtualization-related and we have some bug which does
not trigger on vz6 but triggers on vz7 - just migrate the old Containers
from vz6 to vz7 and check the number of mounts.)

--
Best regards,

Konstantin Khorenko,
Virtuozzo Linux Kernel Team
>
> Steffan
>
>
> -----Original Message-----
> From: users-boun...@openvz.org <users-boun...@openvz.org> On Behalf Of
> Konstantin Khorenko
> Sent: Tuesday, June 9, 2020 12:32
> To: OpenVZ users <users@openvz.org>
> Subject: Re: [Users] openvz 7> centos 8 container
>
> On 06/09/2020 12:19 PM, mailingl...@tikklik.nl wrote:
>> Hello Konstantin,
>>
>>
>> 1: This is a CentOS 6 container;
>>    lsns is not there.
>
> I don't expect multiple mount namespaces in CentOS 6 Containers - I think
> there is only one, but you can verify it:
> # for i in /proc/[0-9]*/ns/mnt; do readlink $i; done | sort -u
>
> Just check the number of lines in the output.
>
> And how many mounts are in the CentOS 6 Container?
> (Assuming only 1 mount namespace, just: cat /proc/mounts | wc -l)
>
>
>> 2:
>> PID: 1,       # of mounts: 600,       cmdline: init-z
>> PID: 707,     # of mounts: 1199,      cmdline: /usr/lib/systemd/systemd-udevd
>> PID: 724,     # of mounts: 1205,      cmdline: /usr/sbin/NetworkManager--no-daemon
>> PID: 11063,   # of mounts: 1201,      cmdline: /usr/sbin/httpd-DFOREGROUND
>> PID: 10410,   # of mounts: 1203,      cmdline: /usr/libexec/postfix/master-w
>> PID: 1118,    # of mounts: 1201,      cmdline: /usr/libexec/mysqld--basedir=/usr
>> PID: 1029,    # of mounts: 1201,      cmdline: php-fpm: master process (/etc/opt/remi/php73/php-fpm.conf)
>> PID: 1037,    # of mounts: 1201,      cmdline: php-fpm: master process (/etc/opt/remi/php71/php-fpm.conf)
>> PID: 1039,    # of mounts: 1201,      cmdline: php-fpm: master process (/etc/opt/remi/php56/php-fpm.conf)
>> PID: 1041,    # of mounts: 1202,      cmdline: php-fpm: master process (/etc/php-fpm.conf)
>
>
>> Total number of mounts: 28169
>
> He-he, there is a mistake in the script - "Total number of mounts" prints
> the sum of PIDs. :) But ok, the real sum of mounts is 10000+.
>
> --
> Konstantin
>
>> -----Original Message-----
>> From: users-boun...@openvz.org <users-boun...@openvz.org> On Behalf Of
>> Konstantin Khorenko
>> Sent: Tuesday, June 9, 2020 10:53
>> To: OpenVZ users <users@openvz.org>
>> Subject: Re: [Users] openvz 7> centos 8 container
>>
>> On 06/09/2020 09:42 AM, mailingl...@tikklik.nl wrote:
>>> Is this a different setting on OpenVZ 6?
>>
>> That's strange; the mount limit is present in vz6 as well.
>>
>> At some point we faced a situation where stopping some Container took an
>> enormous amount of time; we found out that there was software inside
>> which "leaked" mounts. But that does not matter: it means any "bad guy"
>> can create a lot of mounts and start/stop Containers, affecting other
>> Containers on the same node (global locks are taken - namespace_sem,
>> vfsmount_lock).
>>
>> Thus we've implemented the precautionary limit on mounts.
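>>
>> The current value can be inspected with, for example:
>> # sysctl fs.ve-mount-nr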
>>
>> Can you check the total number of mounts on
>> 1) vz6 ("old" server running in centos7 Container?) and
>> 2) vz7 ("new" server running in centos8 Container?)
>>
>> # export total=0; for i in `lsns | grep mnt | awk -e '{print $4;}'`; do \
>>      echo -en "PID: $i,\t# of mounts: "; echo -n `cat /proc/$i/mounts | wc -l`; \
>>      echo -en ",\tcmdline: "; cat /proc/$i/cmdline; echo ""; \
>>      total=$((total + $i)); done; echo "Total number of mounts: $total"
>>
>> Thank you.
>>
>> --
>> Konstantin
>>
>>> The old server is now running on a centos 7 vps
>>>
>>>
>>> -----Oorspronkelijk bericht-----
>>> Van: users-boun...@openvz.org <users-boun...@openvz.org> Namens 
>>> Konstantin Khorenko
>>> Verzonden: maandag 8 juni 2020 23:07
>>> Aan: OpenVZ users <users@openvz.org>
>>> Onderwerp: Re: [Users] openvz 7> centos 8 container
>>>
>>> On 06/08/2020 09:15 PM, mailingl...@tikklik.nl wrote:
>>>> If 4096 is the default,
>>>> then I don't get why this error is there.
>>>>
>>>> It's 'only' 597:
>>>>
>>>> mount | wc -l
>>>> 597
>>>
>>> Most probably you have mount namespaces with more mounts inside.
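>>> They can be listed with, for example:
>>> # lsns -t mnt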
>>>
>>>>
>>>>
>>>> Best regards,
>>>>
>>>> Steffan
>>>> -----Original Message-----
>>>> From: users-boun...@openvz.org <users-boun...@openvz.org> On Behalf Of
>>>> Konstantin Khorenko
>>>> Sent: Monday, June 8, 2020 17:45
>>>> To: OpenVZ users <users@openvz.org>
>>>> Subject: Re: [Users] openvz 7> centos 8 container
>>>>
>>>> On 06/08/2020 03:31 PM, mailingl...@tikklik.nl wrote:
>>>>> I now see on my node:
>>>>>
>>>>> kernel: CT#402 reached the limit on mounts.
>>>>
>>>> You can increase the limit of mounts inside a Container via sysctl 
>>>> "fs.ve-mount-nr" (4096 by default).
>>>>
>>>> Warning: stopping a Container with many mounts inside can take quite a
>>>> long time. Say, if you have 200000 mounts in a Container, the Container
>>>> stop may take ~10 minutes.
>>>>
>>>> --
>>>> Best regards,
>>>>
>>>> Konstantin Khorenko,
>>>> Virtuozzo Linux Kernel Team
>>>>
>>>>> So I think that is the problem.
>>>>>
>>>>> I see an old topic online:
>>>>> https://forum.openvz.org/index.php?t=rview&th=12902&goto=52002
>>>>>
>>>>> Any idea if that is the solution that is needed today?
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: users-boun...@openvz.org <users-boun...@openvz.org> On Behalf Of mailingl...@tikklik.nl
>>>>> Sent: Monday, June 8, 2020 14:17
>>>>> To: 'OpenVZ users' <users@openvz.org>
>>>>> Subject: [Users] openvz 7> centos 8 container
>>>>>
>>>>> Hello,
>>>>>
>>>>> I installed a CentOS 8 OpenVZ container.
>>>>> It was working, but after migrating my data from an older container I
>>>>> keep getting errors like this:
>>>>>
>>>>> php71-php-fpm.service: Failed to set up mount namespacing: Cannot allocate memory
>>>>> php71-php-fpm.service: Failed at step NAMESPACE spawning /opt/remi/php71/root/usr/sbin/php-fpm: Cannot allocate memory
>>>>>
>>>>> php73-php-fpm.service: Failed to set up mount namespacing: Cannot allocate memory
>>>>> php73-php-fpm.service: Failed at step NAMESPACE spawning /opt/remi/php73/root/usr/sbin/php-fpm: Cannot allocate memory
>>>>>
>>>>> httpd.service: Failed to set up mount namespacing: Cannot allocate memory
>>>>> httpd.service: Failed at step NAMESPACE spawning /usr/sbin/httpd: Cannot allocate memory
>>>>>
>>>>> cat /proc/user_beancounters
>>>>> Version: 2.5
>>>>> resource         held       maxheld    barrier              limit                failcnt
>>>>> kmemsize         92078080   121937920  9223372036854775807  9223372036854775807  0
>>>>> lockedpages      0          0          9223372036854775807  9223372036854775807  0
>>>>> privvmpages      52155      75857      9223372036854775807  9223372036854775807  0
>>>>> shmpages         659        2636       9223372036854775807  9223372036854775807  0
>>>>> dummy            0          0          9223372036854775807  9223372036854775807  0
>>>>> numproc          39         39         4194304              4194304              0
>>>>> physpages        97697      111964     9223372036854775807  9223372036854775807  0
>>>>> vmguarpages      0          0          9223372036854775807  9223372036854775807  0
>>>>> oomguarpages     97697      111964     0                    0                    0
>>>>> numtcpsock       0          0          9223372036854775807  9223372036854775807  0
>>>>> numflock         2          5          9223372036854775807  9223372036854775807  0
>>>>> numpty           0          1          9223372036854775807  9223372036854775807  0
>>>>> numsiginfo       0          57         9223372036854775807  9223372036854775807  0
>>>>> tcpsndbuf        0          0          9223372036854775807  9223372036854775807  0
>>>>> tcprcvbuf        0          0          9223372036854775807  9223372036854775807  0
>>>>> othersockbuf     0          0          9223372036854775807  9223372036854775807  0
>>>>> dgramrcvbuf      0          0          9223372036854775807  9223372036854775807  0
>>>>> numothersock     0          0          9223372036854775807  9223372036854775807  0
>>>>> dcachesize       51408896   72798208   9223372036854775807  9223372036854775807  0
>>>>> numfile          711        995        9223372036854775807  9223372036854775807  0
>>>>> dummy            0          0          9223372036854775807  9223372036854775807  0
>>>>> dummy            0          0          9223372036854775807  9223372036854775807  0
>>>>> dummy            0          0          9223372036854775807  9223372036854775807  0
>>>>> numiptent        8          16         9223372036854775807  9223372036854775807  0
>>>>>
>>>>> uname -r      3.10.0-1062.12.1.vz7.131.10
>>>>>
>>>>> Any ideas what went wrong and how to repair it?
>>>>>
>>>>> Thanks,
>>>>> Steffan
_______________________________________________
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
