Re: [ceph-users] How RBD tcp connection works

2019-08-19 Thread fengyd
Hi,

Thanks

Br.
Yafeng

On Tue, 20 Aug 2019 at 11:14, Eliza  wrote:

> Hi
>
> on 2019/8/20 11:00, fengyd wrote:
> > I think you're right.
>
> I am not so sure about it. But I think the Ceph client always wants to
> know the cluster's topology, so it needs to communicate with the cluster
> all the time. The big difference between Ceph and other distributed
> storage systems is that clients participate in the cluster's placement
> calculations.
>
> I think you know Chinese? I just googled this one:
> http://blog.dnsbed.com/?p=1685
>
> regards.
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How RBD tcp connection works

2019-08-19 Thread Eliza

Hi

on 2019/8/20 11:00, fengyd wrote:

I think you're right.


I am not so sure about it. But I think the Ceph client always wants to
know the cluster's topology, so it needs to communicate with the cluster
all the time. The big difference between Ceph and other distributed
storage systems is that clients participate in the cluster's placement
calculations.


I think you know Chinese? I just googled this one:
http://blog.dnsbed.com/?p=1685

regards.


Re: [ceph-users] How RBD tcp connection works

2019-08-19 Thread fengyd
Hi,

I think you're right.

thanks.

Br.
Yafeng

On Tue, 20 Aug 2019 at 10:59, Eliza  wrote:

>
>
> on 2019/8/20 10:57, fengyd wrote:
> > Do long connections mean that a new TCP connection to the same target
> > is re-established after a timeout?
>
> yes, once a connection times out, it reconnects.
>


Re: [ceph-users] How RBD tcp connection works

2019-08-19 Thread Eliza




on 2019/8/20 10:57, fengyd wrote:
Do long connections mean that a new TCP connection to the same target
is re-established after a timeout?


yes, once a connection times out, it reconnects.


Re: [ceph-users] How RBD tcp connection works

2019-08-19 Thread fengyd
Hi,

Do long connections mean that a new TCP connection to the same target is
re-established after a timeout?


On Tue, 20 Aug 2019 at 10:37, Eliza  wrote:

> Hi
>
> on 2019/8/20 10:30, fengyd wrote:
> > If the creation timestamp of the FD is not changed, but the socket
> > information to which the FD is linked is changed, that means a new TCP
> > connection has been established.
> > If there's no reading/writing ongoing, why is a new TCP connection
> > still established, and why is the FD count stable?
>
> Though I am just a Ceph user, not an expert, I think each block device
> acts as a client that is involved in the CRUSH algorithm for data
> placement and rebalancing, so long-lived connections between the client
> and the OSDs are kept.
>
> regards.
>


Re: [ceph-users] How RBD tcp connection works

2019-08-19 Thread Eliza

Hi

on 2019/8/20 10:30, fengyd wrote:
If the creation timestamp of the FD is not changed, but the socket
information to which the FD is linked is changed, that means a new TCP
connection has been established.
If there's no reading/writing ongoing, why is a new TCP connection still
established, and why is the FD count stable?


Though I am just a Ceph user, not an expert, I think each block device
acts as a client that is involved in the CRUSH algorithm for data
placement and rebalancing, so long-lived connections between the client
and the OSDs are kept.


regards.


Re: [ceph-users] How RBD tcp connection works

2019-08-19 Thread fengyd
Hi,

1. Create a VM and a volume, and attach the volume to the VM.
   Check the FD count with lsof: the FD count increases by 10.
2. Fill the volume with the dd command on the VM.
   Check the FD count with lsof: the FD count increases dramatically and
   becomes stable after increasing by 48 (48 is the exact number of OSDs).

If the creation timestamp of the FD is not changed, but the socket
information to which the FD is linked is changed, that means a new TCP
connection has been established.
If there's no reading/writing ongoing, why is a new TCP connection still
established, and why is the FD count stable?
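For reference, the FD counting described above can be scripted roughly like this (a sketch, not from the thread; the `pgrep` pattern "qemu" is an assumption and may differ on your host):

```shell
# Sketch: count open FDs and socket FDs of a QEMU process via /proc.
count_fds() {
    # every entry under /proc/<pid>/fd is one open file descriptor
    ls "/proc/$1/fd" 2>/dev/null | wc -l
}

count_socket_fds() {
    # socket FDs are symlinks whose target looks like "socket:[inode]";
    # "|| true" keeps the count at 0 instead of failing when none match
    ls -l "/proc/$1/fd" 2>/dev/null | grep -c 'socket:' || true
}

qemu_pid=$(pgrep -o qemu || true)   # empty if no qemu process is running
echo "fds=$(count_fds "$qemu_pid") sockets=$(count_socket_fds "$qemu_pid")"
```

Running the two counters before and after the dd fill would show the jump by roughly one socket per OSD that the client talked to.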

Br.
Yafeng

On Tue, 20 Aug 2019 at 10:07, Eliza  wrote:

>
> on 2019/8/20 9:54, fengyd wrote:
> > I checked the FD information with the command "ls -l /proc/25977/fd"  //
> > here 25977 is the QEMU process.
> > I found that the creation timestamp of the FD was not changed, but the
> > socket information to which the FD was linked was changed.
> > So, I guess the FD is reused when establishing a new TCP connection.
>
> I also see a lot of TCP connections from hosts with mounted block
> devices to Ceph's backend.
>
> regards.
>


Re: [ceph-users] How RBD tcp connection works

2019-08-19 Thread Eliza


on 2019/8/20 9:54, fengyd wrote:
I checked the FD information with the command "ls -l /proc/25977/fd"  //
here 25977 is the QEMU process.
I found that the creation timestamp of the FD was not changed, but the
socket information to which the FD was linked was changed.

So, I guess the FD is reused when establishing a new TCP connection.


I also see a lot of TCP connections from hosts with mounted block
devices to Ceph's backend.


regards.


Re: [ceph-users] How RBD tcp connection works

2019-08-19 Thread fengyd
Hi,

I checked the FD information with the command "ls -l /proc/25977/fd"  //
here 25977 is the QEMU process.
I found that the creation timestamp of the FD was not changed, but the
socket information to which the FD was linked was changed.
So, I guess the FD is reused when establishing a new TCP connection.
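That FD-to-socket link can be inspected directly (a sketch; fd 42 is a placeholder, and matching the inode against `ss -tnpe` output is an assumption about the installed iproute2 version):

```shell
# The symlink target of a socket FD looks like "socket:[123456]"; the
# number is the socket inode, which identifies the underlying connection.
parse_socket_inode() {
    printf '%s\n' "$1" | sed -n 's/^socket:\[\([0-9]*\)\]$/\1/p'
}

socket_inode() {
    # inode of the socket behind fd $2 of process $1
    parse_socket_inode "$(readlink "/proc/$1/fd/$2")"
}

# Example (25977 is the QEMU PID from this thread; fd 42 is a placeholder):
#   inode=$(socket_inode 25977 42)
#   ss -tnpe | grep "ino:$inode"   # local/peer address of that connection
```

If the inode behind a given fd number changes between two runs, the fd was closed and reused for a new connection, which matches the observation above.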


On Tue, 20 Aug 2019 at 04:11, Eliza  wrote:

> Hi,
>
> on 2019/8/19 16:10, fengyd wrote:
> > I think when reading/writing to a volume/image, a TCP connection needs
> > to be established, which needs an FD, so the FD count may increase.
> > But after reading/writing, why doesn't the FD count decrease?
>
> The TCP connections may be long-lived.
>


Re: [ceph-users] How RBD tcp connection works

2019-08-19 Thread fengyd
I collected the lsof output at different times and found that:
the total number of open FDs is stable at a fixed value, while some of
the TCP connections have changed.
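That kind of comparison can be sketched as follows (the snapshot paths and the 900-second sleep are illustrative, not from the thread):

```shell
# Snapshot the socket inodes of a process; diffing two snapshots taken
# some time apart shows which connections were torn down and recreated.
snapshot_sockets() {
    ls -l "/proc/$1/fd" 2>/dev/null | awk -F'socket:' '/socket:/ {print $2}' | sort
}

# Usage sketch (5509 is the QEMU PID from this thread):
#   snapshot_sockets 5509 > /tmp/sock.1
#   sleep 900                        # longer than ms_connection_idle_timeout
#   snapshot_sockets 5509 > /tmp/sock.2
#   comm -3 /tmp/sock.1 /tmp/sock.2  # inodes present in only one snapshot
```

A stable line count with differing inodes is consistent with idle connections being dropped and re-established on the same fd numbers.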


On Mon, 19 Aug 2019 at 16:42, fengyd  wrote:

> - how long do you monitor after r/w finishes?
> More than 900 seconds.
>
> I executed the following command last Saturday and today; the output was
> the same.
> sudo lsof -p 5509 | wc -l
>
> And the result from /proc:
> ls -ltr /proc/5509/fd | grep socket | grep "Aug 13" | wc -l
> 134
> sudo ls -ltr /proc/5509/fd | grep socket | grep "Aug 19" | wc -l
> 0
>
> In which configuration file can I find ms_connection_idle_timeout?
>
> On Mon, 19 Aug 2019 at 16:26, huang jun  wrote:
>
>> how long do you monitor after r/w finishes?
>> there is a config option named 'ms_connection_idle_timeout' whose
>> default value is 900
>>
>> fengyd wrote on Mon, 19 Aug 2019 at 16:10:
>> >
>> > Hi,
>> >
>> > I have a question about TCP connections.
>> > In the test environment, OpenStack uses Ceph RBD as backend storage.
>> > I created a VM and attached a volume/image to the VM.
>> > I monitored how many FDs were used by the QEMU process.
>> > I used the dd command to fill the whole volume/image.
>> > I found that the FD count increased, then became stable at a fixed
>> > value after some time.
>> >
>> > I think when reading/writing to a volume/image, a TCP connection needs
>> > to be established, which needs an FD, so the FD count may increase.
>> > But after reading/writing, why doesn't the FD count decrease?
>> >
>> > Thanks in advance.
>> > BR.
>> > Yafeng
>>
>


Re: [ceph-users] How RBD tcp connection works

2019-08-19 Thread fengyd
- how long do you monitor after r/w finishes?
More than 900 seconds.

I executed the following command last Saturday and today; the output was
the same.
sudo lsof -p 5509 | wc -l

And the result from /proc:
ls -ltr /proc/5509/fd | grep socket | grep "Aug 13" | wc -l
134
sudo ls -ltr /proc/5509/fd | grep socket | grep "Aug 19" | wc -l
0

In which configuration file can I find ms_connection_idle_timeout?

On Mon, 19 Aug 2019 at 16:26, huang jun  wrote:

> how long do you monitor after r/w finishes?
> there is a config option named 'ms_connection_idle_timeout' whose
> default value is 900
>
> fengyd wrote on Mon, 19 Aug 2019 at 16:10:
> >
> > Hi,
> >
> > I have a question about TCP connections.
> > In the test environment, OpenStack uses Ceph RBD as backend storage.
> > I created a VM and attached a volume/image to the VM.
> > I monitored how many FDs were used by the QEMU process.
> > I used the dd command to fill the whole volume/image.
> > I found that the FD count increased, then became stable at a fixed
> > value after some time.
> >
> > I think when reading/writing to a volume/image, a TCP connection needs
> > to be established, which needs an FD, so the FD count may increase.
> > But after reading/writing, why doesn't the FD count decrease?
> >
> > Thanks in advance.
> > BR.
> > Yafeng
>


Re: [ceph-users] How RBD tcp connection works

2019-08-19 Thread huang jun
how long do you monitor after r/w finishes?
there is a config option named 'ms_connection_idle_timeout' whose
default value is 900 (seconds)
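For reference, a few ways to inspect that option (a sketch; the admin-socket path is only an example and varies by deployment):

```shell
# Show the compiled-in default of the messenger idle timeout:
ceph-conf --show-config-value ms_connection_idle_timeout

# Or query a running client through its admin socket (path is an example):
ceph daemon /var/run/ceph/ceph-client.admin.asok config get ms_connection_idle_timeout

# To override it, set it in /etc/ceph/ceph.conf on the client, e.g.:
# [global]
# ms_connection_idle_timeout = 1800
```

Note the option is a client-side messenger setting, so it takes effect in the process that holds the RBD connection (here, the QEMU process).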

fengyd wrote on Mon, 19 Aug 2019 at 16:10:
>
> Hi,
>
> I have a question about TCP connections.
> In the test environment, OpenStack uses Ceph RBD as backend storage.
> I created a VM and attached a volume/image to the VM.
> I monitored how many FDs were used by the QEMU process.
> I used the dd command to fill the whole volume/image.
> I found that the FD count increased, then became stable at a fixed value
> after some time.
>
> I think when reading/writing to a volume/image, a TCP connection needs
> to be established, which needs an FD, so the FD count may increase.
> But after reading/writing, why doesn't the FD count decrease?
>
> Thanks in advance.
> BR.
> Yafeng


Re: [ceph-users] How RBD tcp connection works

2019-08-19 Thread Eliza

Hi,

on 2019/8/19 16:10, fengyd wrote:
I think when reading/writing to a volume/image, a TCP connection needs
to be established, which needs an FD, so the FD count may increase.

But after reading/writing, why doesn't the FD count decrease?


The TCP connections may be long-lived.