Hi Prabu,
Check the krbd (and libceph) versions running in the kernel. You can try 
building the latest krbd source for the 7.1 kernel if that is an option for you.
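
For example (just a sketch; the exact fields vary by kernel build), something 
like this should show which rbd/libceph modules the kernel is running:

uname -r
modinfo rbd | grep -Ei 'filename|ver'
modinfo libceph | grep -Ei 'filename|ver'
dmesg | grep -i 'libceph: loaded'
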
As I mentioned in my earlier mail, please isolate the problem the way I 
suggested, if that seems reasonable to you.

Thanks & Regards
Somnath

From: gjprabu [mailto:gjpr...@zohocorp.com]
Sent: Sunday, January 03, 2016 10:53 PM
To: gjprabu
Cc: Somnath Roy; ceph-users; Siva Sokkumuthu
Subject: Re: [ceph-users] OSD size and performance

Hi Somnath,

               Please check the details below and let us know if you need any 
other information.

Regards
Prabu

---- On Sat, 02 Jan 2016 08:47:05 +0530 gjprabu 
<gjpr...@zohocorp.com> wrote ----

Hi Somnath,

       Please check the details and help me with this issue.

Regards
Prabu

---- On Thu, 31 Dec 2015 12:50:36 +0530 gjprabu 
<gjpr...@zohocorp.com> wrote ----

Hi Somnath,

         We are using RBD; please find the Linux and rbd versions below. I agree 
this is likely a client-side issue. My thought went to the backup because we take 
a full backup (not incremental) once a week, and we noticed the issue once around 
that time, but we are not sure.

Linux version: CentOS Linux release 7.1.1503 (Core)
Kernel: 3.10.91

rbd --version
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)

rbd showmapped
id pool image           snap device
1  rbd  downloads  -    /dev/rbd1

rbd ls
downloads

The client servers mount the RBD device using the OCFS2 file system.
/dev/rbd1                  ocfs2     9.6T  2.6T  7.0T  27% /data/downloads

The client-level (OCFS2) cluster configuration is done with 5 clients, and we 
use the procedure below on each client node.

1) rbd map downloads --pool rbd --name client.admin -m 
192.168.112.192,192.168.112.193,192.168.112.194 -k 
/etc/ceph/ceph.client.admin.keyring


2) Format the RBD device with OCFS2:
mkfs.ocfs2 -b4K -C 4K -L label -T mail -N5 /dev/rbd/rbd/downloads

3) Do the OCFS2 client-level cluster configuration and start the OCFS2 services 
(a rough sketch of this step follows after this list).

4) mount /dev/rbd/rbd/downloads /data/downloads
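
Roughly, step 3 looks like the sketch below (the cluster name, node names and IP 
addresses here are placeholders, not our real values):

/etc/ocfs2/cluster.conf:

cluster:
        node_count = 5
        name = ocfs2cluster

node:
        ip_port = 7777
        ip_address = 192.168.112.201
        number = 0
        name = client1
        cluster = ocfs2cluster

(one "node:" section per client, five in total)

Then enable and start the cluster stack on each client, for example:

service o2cb configure
service o2cb online ocfs2cluster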

 Please let me know if you need any other information.

Regards
Prabu




---- On Thu, 31 Dec 2015 01:04:39 +0530 Somnath Roy 
<somnath....@sandisk.com> wrote ----

Prabu,



I assume you are using krbd then. Could you please let us know the Linux 
version/flavor you are using?

krbd had some hang issues that are supposed to be fixed in the latest versions 
available. It could also be due to the OCFS2 -> krbd integration: handling data 
consistency is the responsibility of OCFS2, as krbd does not guarantee that. So, 
I would suggest doing the following to root-cause the issue, if your cluster is 
not yet in production.



1. Do a synthetic fio run on krbd alone (or after creating a filesystem on top 
of it) and see if you can reproduce the hang (see the example sketch after this 
list).



2. Try building the latest krbd, or upgrade your Linux version to get a newer 
krbd, and see if the hang still happens.
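
For item 1, something like the following could work (just a sketch; adjust the 
device, sizes and runtime to your setup, and note that raw writes to /dev/rbd1 
would destroy your existing OCFS2 filesystem, so map a separate scratch image):

rbd create fio-test --size 102400 --pool rbd
rbd map fio-test --pool rbd --name client.admin
# use the device that "rbd map" / "rbd showmapped" reports for the scratch image
fio --name=krbd-hang-test --filename=/dev/rbdX --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --runtime=600 --time_based \
    --group_reporting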





<< Also we are taking backup from client, we feel that could be the reason for 
this hang



I assume this is a regular filesystem backup? Why do you think this could be a 
problem?



I think it is a client-side issue; I doubt it is because of the large OSD size.





Thanks & Regards

Somnath



From: gjprabu [mailto:gjpr...@zohocorp.com]
Sent: Wednesday, December 30, 2015 4:29 AM
To: gjprabu
Cc: Somnath Roy; ceph-users; Siva Sokkumuthu
Subject: Re: [ceph-users] OSD size and performance



Hi Somnath,



         Thanks for your reply. In the current setup we have a client hang issue: 
it hangs frequently, and after a reboot it works again. The clients mount the 
device with the OCFS2 file system so that multiple clients can access the same 
data concurrently. Also, we take backups from a client, and we feel that could be 
the reason for this hang.



Regards

Prabu







---- On Wed, 30 Dec 2015 11:33:20 +0530 Somnath Roy 
<somnath....@sandisk.com> wrote ----



FYI, we are using 8TB SSD drives as OSDs and have not seen any problems so far. 
Failure domain could be a concern for bigger OSDs.



Thanks & Regards

Somnath



From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of gjprabu
Sent: Tuesday, December 29, 2015 9:38 PM
To: ceph-users
Cc: Siva Sokkumuthu
Subject: Re: [ceph-users] OSD size and performance



Hi Team,



         Could anybody please clarify the queries below?



Regards

Prabu



---- On Tue, 29 Dec 2015 13:03:45 +0530 gjprabu 
<gjpr...@zohocorp.com> wrote ----



Hi Team,



We are using Ceph with 3 OSDs and 2 replicas. Each OSD is 13TB in size, and the 
data has currently reached 2.5TB per OSD. Will we face any problems because of 
this large size?



OSD server configuration

Hard disk -- 13TB

RAM -- 96GB

CPU -- 2 CPUs, 8 cores each.





Regards

Prabu










_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
