Hi,
Thanks
Br.
Yafeng
On Tue, 20 Aug 2019 at 11:14, Eliza wrote:
> Hi
>
> on 2019/8/20 11:00, fengyd wrote:
> > I think you're right.
>
> I am not so sure about it. But I think the ceph client always wants to
> know the cluster's topology, so it needs to communicate with the cluster
> all the time. The big difference between ceph and other distributed
> storage is that clients participate in [...]
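A quick way to see those long-lived cluster sessions from the host is to
list the TCP peers held by the Qemu process; the mon and OSD addresses
should show up. (Untested sketch; PID 25977 is the Qemu process mentioned
elsewhere in this thread, adjust as needed.)

sudo ss -tnp | grep 'pid=25977'   # established peers include the cluster daemons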
Hi,
I think you're right.
Thanks.
Br.
Yafeng
On Tue, 20 Aug 2019 at 10:59, Eliza wrote:
>
>
> on 2019/8/20 10:57, fengyd wrote:
> > Long connections means a new tcp connection to the same targets is
> > reestablished after a timeout?
>
> Yes, once it has timed out, it reconnects.
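To watch that reconnect behaviour directly, one could poll a single FD of
the Qemu process and log which socket it points at. (Sketch only; PID 25977
and fd 34 are just examples taken from this thread.)

while true; do
    date '+%T'
    sudo readlink /proc/25977/fd/34    # prints socket:[inode]
    sleep 60
done

If the FD is reused for a new tcp connection after the idle timeout, the
socket inode printed here changes while the fd number itself stays the
same, matching what was observed in this thread.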
Hi,
Long connections means a new tcp connection to the same targets is
reestablished after a timeout?
On Tue, 20 Aug 2019 at 10:37, Eliza wrote:
> Hi
>
> on 2019/8/20 10:30, fengyd wrote:
> > If the creation timestamp of the FD is not changed, but the socket
> > information to which th
Hi,
1. Create a VM and a volume, and attach the volume to the VM.
   Check the FD count with lsof: the FD count is increased by 10.
2. Fill the volume with the dd command on the VM (an example command is
   sketched below).
   Check the FD count with lsof: the FD count increases dramatically and
   becomes stable after the FD count has increased by [...]
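For step 2, a typical fill command inside the VM could look like this
(assuming the attached volume shows up as /dev/vdb; this overwrites the
whole device, so only use it on a scratch volume):

sudo dd if=/dev/zero of=/dev/vdb bs=1M oflag=direct status=progress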
Hi,
I checked the FD information with the command "ls -l /proc/25977/fd"
(here 25977 is the Qemu process).
I found that the creation timestamp of the FD was not changed, but the
socket information to which the FD was linked was changed.
So, I guess the FD is reused when establishing a new tcp connection.
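To confirm that guess, the socket inode an FD points at can be matched
against the kernel's TCP table. (Sketch only; fd 34 and the inode value
are made-up examples.)

sudo readlink /proc/25977/fd/34    # e.g. socket:[3281950]
grep 3281950 /proc/net/tcp         # local/remote address:port in hex, plus state
                                   # (IPv6 sockets are in /proc/net/tcp6)

When the FD is reused for a new connection, the inode and the matching
line in /proc/net/tcp change accordingly.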
I collected the lsof output at different times and found that:
The total number of open FDs is stable at a fixed value, and some of the
tcp connections are changed.

On Mon, 19 Aug 2019 at 16:42, fengyd wrote:
> -how long do you monitor after r/w finish?
> More than 900 seconds.
>
> I executed the following command last Saturday and today, and the output
> was the same:
> sudo lsof -p 5509 | wc -l
> And the result from /proc:
> ls -ltr /proc/5509/fd | grep socket | grep "Aug 13" | wc -l
> 134
> sudo ls -ltr /proc/5509/fd | grep [...]
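A small watcher along those lines, logging the total FD count and how many
of the FDs are sockets over time (untested sketch; PID 5509 as above):

while sleep 60; do
    date '+%F %T'
    sudo ls /proc/5509/fd | wc -l              # total open FDs
    sudo ls -l /proc/5509/fd | grep -c socket  # how many of them are sockets
done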
how long do you monitor after r/w finish?
There is a config option named 'ms_connection_idle_timeout' whose default
value is 900.
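If the cluster is up, the effective value can be checked through a running
daemon's admin socket (run on the host of that daemon; osd.0 is just an
example, and a librbd client such as Qemu reads the option from its own
ceph.conf instead):

ceph daemon osd.0 config get ms_connection_idle_timeout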
On Mon, 19 Aug 2019 at 16:10, fengyd wrote:
>
> Hi,
>
> I have a question about tcp connection.
> In the test environment, openstack uses ceph RBD as backend storage.
> I created a [...]
Hi,

on 2019/8/19 16:10, fengyd wrote:
> I think when reading/writing to the volume/image, tcp connections need
> to be established, which needs FDs, so the FD count may increase.
> But after reading/writing, why doesn't the FD count decrease?

The tcp connections may be long connections.
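The number of established connections held by the Qemu process can be
sampled before, during, and after the dd run to see whether they linger
(PID 25977 is the example Qemu PID used elsewhere in this thread):

sudo ss -tnp state established | grep -c 'pid=25977'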
Hi,
I have a question about tcp connections.
In the test environment, openstack uses ceph RBD as backend storage.
I created a VM and attached a volume/image to the VM.
I monitored how many FDs were used by the Qemu process.
I used the dd command to fill the whole volume/image.
I found that the FD count was increased [...]
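The monitoring described here can be done with something like the
following (25977 is the Qemu PID used as an example elsewhere in the
thread; substitute the PID pgrep reports):

pgrep -a qemu                             # find the Qemu process for the VM
sudo watch -n 10 'lsof -p 25977 | wc -l'  # sample the FD count every 10 s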