Thank you for your reply. We will run the script and let you know the results
once the number of TCP connections rises again. We just restarted the server
several days ago.
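For the counting itself, here is a minimal sketch of what such a script could look like. It assumes the default Ceph OSD port range 6800-7300 and the field layout of `ss -Htan` output (State Recv-Q Send-Q Local:Port Peer:Port); adjust both for your cluster.

```shell
#!/bin/sh
# Sketch only: count established TCP connections whose peer port falls in the
# default OSD port range (6800-7300). Adjust the range if your cluster differs.
count_osd_conns() {
  awk '$1 == "ESTAB" {
    n = split($5, a, ":")            # $5 is the peer address; port follows the last colon
    if (a[n] + 0 >= 6800 && a[n] + 0 <= 7300) c++
  } END { print c + 0 }'
}

# Live usage (against the current socket table):
# ss -Htan | count_osd_conns
```

Run it from cron on the radosgw host and log the count over time to see whether connections are leaking or just accumulating under load.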
> On 23 May 2019, at 12:26 AM, Igor Podlesny wrote:
>
>> On Wed, 22 May 2019 at 20:32, Torben Hørup wrote:
>> …restarted the radosgw service to get rid of them.
>
>> On Mon, 20 May 2019 at 06:56, Li Wang wrote:
>> Dear ceph community members,
>>
>> We have a ceph cluster (mimic 13.2.4) with 7 nodes and 130+ OSDs. However,
>> we observed over 70 million active TCP connections …
Hi John,
Thanks for your reply. We have also restarted the server to get rid of it.
Hi All,
Does anybody know a better solution than restarting the server? Since we use
radosgw in production, we cannot afford to restart the service on a daily basis.
Regards,
Li Wang
> On 20 May 2019, at 2:48 …
… on the radosgw are connected to OSDs.
May I ask what might be the possible reason causing the massive number of
TCP connections? And is there any configuration or tuning work that I can
do to solve this issue?
Any suggestion is highly appreciated.
Regards,
Li Wang
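One knob that may be worth checking is the messenger's idle-connection timeout, which controls how long idle sessions are kept before being reaped. This is a hedged suggestion only: verify that `ms_connection_idle_timeout` exists in your release and test the value before deploying. A ceph.conf sketch:

```ini
# Sketch only: reap idle messenger connections sooner (value is in seconds).
# Confirm the option name and default for your Ceph release before applying.
[global]
ms connection idle timeout = 600
```

Lowering this would not fix a genuine connection leak, but it can keep the count of idle sessions bounded between restarts.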
…; also, any suggestions, tests, and technical involvement are welcome
to make it ready to be merged upstream.
Cheers,
Li Wang
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Is this useful?
http://techs.enovance.com/6424/back-from-the-summit-cephopenstack-integration
On 2013-12-14, Kai log1...@yeah.net wrote: ----- Original Message -----
From: Kai log1...@yeah.net
Sent: Saturday, 14 December 2013
To: ceph-us...@ceph.com
主题: [ceph-users] CEPH and Savanna Integration
Hi
…ems is a remote machine?
Did you set up the corresponding directories: /var/lib/ceph/osd/ceph-0,
and call mkcephfs beforehand?
You can also try starting osd manually by 'ceph-osd -i 0 -c
/etc/ceph/ceph.conf', then 'pgrep ceph-osd' to see if they are there,
then 'ceph -s' to check the health.
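The check sequence above can be sketched as a small script. The ceph commands are taken verbatim from the steps and are shown as comments (they need a live cluster); only the `health_of` parsing helper is new and hypothetical.

```shell
#!/bin/sh
# Sketch of the manual start-and-check sequence described above.
# 1) start osd.0 by hand:     ceph-osd -i 0 -c /etc/ceph/ceph.conf
# 2) confirm it is running:   pgrep ceph-osd
# 3) check cluster health:    ceph -s

# Hypothetical helper: pull the health status out of `ceph -s` output.
health_of() {
  awk '/health/ { print $2; exit }'
}

# Usage: ceph -s | health_of    # e.g. HEALTH_OK
```

If `pgrep ceph-osd` shows nothing after step 1, look at the OSD log under /var/log/ceph/ for the startup error before going further.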