I collected lsof output at different times and found that the total
number of open FDs stays at a fixed value, while some of the TCP
connections change over time.
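One way to see this directly is to snapshot the process's socket FDs twice and diff the snapshots: each socket FD appears in /proc/<pid>/fd as "socket:[inode]", and a changed inode means the old connection was replaced by a new one even though the FD count stayed the same. A minimal sketch (the PID variable is an assumption; substitute the QEMU PID, e.g. 5509 from this thread, and run with sudo if it is not your own process):

```shell
PID=${PID:-$$}          # PID of the process to inspect (defaults to this shell for demo)

snapshot() {
  # Each socket FD shows up as a "socket:[inode]" symlink target; the inode
  # identifies the underlying connection, so a changed inode = a new connection.
  ls -l /proc/"$PID"/fd 2>/dev/null | grep -o 'socket:\[[0-9]*\]' | sort
}

before=$(mktemp); after=$(mktemp)
snapshot > "$before" || true
sleep 1                 # in practice, wait minutes between snapshots
snapshot > "$after" || true

echo "sockets before: $(wc -l < "$before")"
echo "sockets after:  $(wc -l < "$after")"
# Lines present in only one file are connections opened or closed in between.
diff "$before" "$after" || true
rm -f "$before" "$after"
```

If the two counts match but diff shows churn, connections are being recycled rather than leaked, which is consistent with the observation above.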


On Mon, 19 Aug 2019 at 16:42, fengyd <fengy...@gmail.com> wrote:

> -how long do you monitor after r/w finish?
> More than 900 seconds.
>
> I executed the following command last Saturday and again today; the
> output was the same.
> sudo lsof -p 5509 | wc -l
>
> And the result from /proc:
> ls -ltr /proc/5509/fd | grep socket | grep "Aug 13" | wc -l
> 134
> sudo ls -ltr /proc/5509/fd | grep socket | grep "Aug 19" | wc -l
> 0
>
> In which configuration file can I find ms_connection_idle_timeout?
>
> On Mon, 19 Aug 2019 at 16:26, huang jun <hjwsm1...@gmail.com> wrote:
>
>> how long do you monitor after r/w finishes?
>> there is a configuration option named 'ms_connection_idle_timeout'
>> whose default value is 900 (seconds)
>>
>> On Mon, 19 Aug 2019 at 16:10, fengyd <fengy...@gmail.com> wrote:
>> >
>> > Hi,
>> >
>> > I have a question about TCP connections.
>> > In the test environment, OpenStack uses Ceph RBD as backend storage.
>> > I created a VM and attached a volume/image to it.
>> > I monitored how many FDs were used by the QEMU process.
>> > I used the dd command to fill the whole volume/image.
>> > I found that the FD count increased and then stabilized at a fixed
>> value after some time.
>> >
>> > I think that when reading/writing to the volume/image, TCP connections
>> need to be established, each of which needs an FD, so the FD count may
>> increase.
>> > But after the reading/writing finishes, why doesn't the FD count
>> decrease?
>> >
>> > Thanks in advance.
>> > BR.
>> > Yafeng
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
