[ceph-users] About available space ceph bluestore

2019-06-24 Thread gjprabu
Hi Team, We have 9 OSDs, and when we run ceph osd df it shows TOTAL SIZE 31 TiB, USE 13 TiB, AVAIL 18 TiB, %USE 42.49. When checked on the client machine it shows Size 14T, USE 6.5T, AVAIL 6.6T; around 3 TB is missing. We are using replication size 2. Any
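
For reference, with replication size 2 the client-visible capacity is roughly half the raw totals reported by ceph osd df, so the numbers above largely reconcile; a sketch of the arithmetic, assuming all pools use size 2:

    31 TiB raw total / 2 replicas ~ 15.5 TiB usable  (client reports 14T)
    13 TiB raw used  / 2 replicas ~  6.5 TiB of data (matches the client's 6.5T USE)

The remaining gap in AVAIL is typically explained by imbalance across OSDs, since the reported available space is derived from the fullest OSD.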

Re: [ceph-users] ceph directory not accessible

2017-12-17 Thread gjprabu
Hi Yan, Sorry for the late reply; it is the kernel client and ceph version 10.2.3. It's not reproducible in other mounts. Regards Prabu GJ On Thu, 14 Dec 2017 12:18:52 +0530 Yan, Zheng uker...@gmail.com wrote On Thu, Dec 14, 2017 at 2:14 PM, gjprabu gjpr

[ceph-users] ceph directory not accessible

2017-12-13 Thread gjprabu
Hi Team, Today we found that one client's data was not accessible; it is shown as "d? ? ? ??? backups". Has anybody faced the same, and is there any solution? [root@ /]# cd /data/build/repository/rep/lab [root@integ-hm11 gitlab]# ls -althr
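
Entries like "d? ? ? ???" mean the directory name could be read but the stat() call on it failed. A common first round of checks on a kernel-client mount looks something like this (paths are placeholders based on the post above):

    # dmesg | tail                                  # look for ceph kernel client errors
    # stat /data/build/repository/rep/lab/backups   # shows the exact errno
    # umount /data/build && mount -a                # remounting often clears a stale inode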

Re: [ceph-users] OSD is near full and slow in accessing storage from client

2017-11-22 Thread gjprabu
, 2017 at 12:11 AM gjprabu gjpr...@zohocorp.com wrote: Hi David, This is our current status. ~]# ceph status cluster b466e09c-f7ae-4e89-99a7-99d30eba0a13 health HEALTH_WARN mds0: Client integ-hm3 failing to respond to cache pressure mds0

Re: [ceph-users] OSD is near full and slow in accessing storage from client

2017-11-20 Thread gjprabu
Regards Prabu GJ On Mon, 20 Nov 2017 21:35:17 +0530 David Turner drakonst...@gmail.com wrote What is your current `ceph status` and `ceph df`? The status of your cluster has likely changed a bit in the last week. On Mon, Nov 20, 2017 at 6:00 AM gjprabu gjpr
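
For anyone following along, the commands being requested here are (output layout varies by release):

    # ceph status    # overall health plus mon/osd/pg summary
    # ceph df        # GLOBAL raw usage and per-pool (replicated) usage
    # ceph osd df    # per-OSD utilization, useful for spotting near-full OSDs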

Re: [ceph-users] OSD is near full and slow in accessing storage from client

2017-11-20 Thread gjprabu
. The built-in reweighting scripts might help your data distribution. reweight-by-utilization On Sun, Nov 12, 2017, 11:41 AM gjprabu gjpr...@zohocorp.com wrote: Hi David, Thanks for your valuable reply , once complete the backfilling for new osd and will consider by increasing replica
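
A sketch of the built-in reweighting mentioned above; the threshold of 110 (act on OSDs more than 10% above average utilization) is an assumed value, and a dry run first is prudent:

    # ceph osd test-reweight-by-utilization 110   # show proposed weight changes only
    # ceph osd reweight-by-utilization 110        # apply them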

Re: [ceph-users] OSD is near full and slow in accessing storage from client

2017-11-12 Thread gjprabu
great! But if not, at least do the metadata pool. If you lose an object in the data pool, you just lose that file. If you lose an object in the metadata pool, you might lose access to the entire CephFS volume. On Sun, Nov 12, 2017, 9:39 AM gjprabu <gjpr...@zohocorp.com> wrote:Hi Cassiano,       Tha
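
A minimal sketch of raising the metadata pool's replica count as suggested; the pool name cephfs_metadata is an assumption, check ceph osd lspools for the real one:

    # ceph osd pool set cephfs_metadata size 3
    # ceph osd pool set cephfs_metadata min_size 2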

Re: [ceph-users] OSD is near full and slow in accessing storage from client

2017-11-12 Thread gjprabu
745, avenue de l'Université 76800 Saint-Etienne du Rouvray - France tel. +33 2 32 91 42 91 fax. +33 2 32 91 42 92 http://www.criann.fr mailto:sebastien.vigne...@criann.fr support: supp...@criann.fr On 12 Nov 2017 at 13:29, gjprabu gjpr...@zohocorp.com wrote: Hi

Re: [ceph-users] OSD is near full and slow in accessing storage from client

2017-11-12 Thread gjprabu
. +33 2 32 91 42 92 http://www.criann.fr mailto:sebastien.vigne...@criann.fr support: supp...@criann.fr On 12 Nov 2017 at 13:29, gjprabu gjpr...@zohocorp.com wrote: Hi Sebastien, Below are the query details. I am not much of an expert and am still learning. The PGs are not stuck stat

Re: [ceph-users] OSD is near full and slow in accessing storage from client

2017-11-12 Thread gjprabu
er Technopôle du Madrillet 745, avenue de l'Université 76800 Saint-Etienne du Rouvray - France tel. +33 2 32 91 42 91 fax. +33 2 32 91 42 92 http://www.criann.fr mailto:sebastien.vigne...@criann.fr support: supp...@criann.fr On 12 Nov 2017 at 10:04, gjprabu gjpr...@zohocorp.com wrote:

[ceph-users] OSD is near full and slow in accessing storage from client

2017-11-12 Thread gjprabu
Hi Team, We have a ceph setup with 6 OSDs and got an alert that 2 OSDs are near full. We faced slow access to ceph from clients, so I added a 7th OSD, but 2 OSDs (osd.0 and osd.4) are still showing near full. I have restarted the ceph service on osd.0 and osd.4. Kindly
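
One common stopgap for individual near-full OSDs is temporarily lowering their reweight so data migrates off them; a sketch, with assumed weight values:

    # ceph osd df               # confirm which OSDs are near full
    # ceph osd reweight 0 0.90  # shift some PGs off osd.0
    # ceph osd reweight 4 0.90  # and off osd.4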

[ceph-users] Ceph mount error and mds laggy

2017-08-15 Thread gjprabu
Hi Team, We are having an issue mounting ceph; it is throwing the error "mount error 5 = Input/output error", and the MDS ceph-zstorage1 seems laggy. Kindly help us with this issue. cluster a8c92ae6-6842-4fa2-bfc9-8cdefd28df5c health HEALTH_WARN
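
The usual first checks for a laggy MDS look roughly like this (the systemd unit name is an assumption based on the host name above):

    # ceph mds stat                               # which MDS is active, which are laggy
    # systemctl restart ceph-mds@ceph-zstorage1   # assuming systemd-managed daemons
    # ceph -w                                     # watch the MDS rejoin before retrying the mount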

Re: [ceph-users] Ceph health warn MDS failing to respond to cache pressure

2017-05-10 Thread gjprabu
Hi John, Thanks for your reply; we are using the version below for client and MDS (ceph version 10.2.2). Regards Prabu GJ On Wed, 10 May 2017 12:29:06 +0530 John Spray jsp...@redhat.com wrote On Thu, May 4, 2017 at 7:28 AM, gjprabu gjpr...@zohocorp.com wrote: Hi

Re: [ceph-users] Ceph health warn MDS failing to respond to cache pressure

2017-05-10 Thread gjprabu
s to stall for a while. do at your own risk). Regards, Webert Lima DevOps Engineer at MAV Tecnologia Belo Horizonte - Brasil On Thu, May 4, 2017 at 3:28 AM, gjprabu gjpr...@zohocorp.com wrote: ___ ceph-users mailing list ceph-users@list

[ceph-users] Ceph health warn MDS failing to respond to cache pressure

2017-05-04 Thread gjprabu
Hi Team, We are running cephfs with 5 OSDs, 3 MONs, and 1 MDS. There is a health warning "failing to respond to cache pressure". Kindly advise how to fix this issue. cluster b466e09c-f7ae-4e89-99a7-99d30eba0a13 health HEALTH_WARN mds0: Client
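
"Failing to respond to cache pressure" means a client is holding more inodes/caps than the MDS asked it to release; the durable fix is usually upgrading old kernel or ceph-fuse clients. On a Jewel-era cluster, one commonly suggested mitigation is enlarging the MDS inode cache (daemon name and value are assumptions):

    # ceph daemon mds.<name> config set mds_cache_size 200000   # Jewel default is 100000 inodes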

Re: [ceph-users] Is single MDS data recoverable

2017-04-26 Thread gjprabu
:43, gjprabu wrote: Hi Team, I am running a cephfs setup with a single MDS. In a single-MDS setup, if the MDS goes down, what will happen to the data? Is it advisable to run multiple MDSs? MDS data is in the Ceph cluster itself. After an MDS failure you can start another MDS on a different

[ceph-users] Is single MDS data recoverable

2017-04-25 Thread gjprabu
Hi Team, I am running a cephfs setup with a single MDS. In a single-MDS setup, if the MDS goes down, what will happen to the data? Is it advisable to run multiple MDSs? Regards Prabu GJ
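
Since MDS state lives in RADOS rather than on the MDS host, adding a standby is cheap; with the era-appropriate ceph-deploy tooling it is roughly (hostname is a placeholder):

    # ceph-deploy mds create node2   # new daemon comes up as standby
    # ceph mds stat                  # should show 1 up:active plus a standby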

Re: [ceph-users] OSD disk concern

2017-04-19 Thread gjprabu
wrote Hi Prabhu, You can use both the OS and OSD on a single SSD. Regards Shureshbabu On 19 April 2017 at 11:02, gjprabu gjpr...@zohocorp.com wrote: Hi Team, The Ceph OSD disk allocation procedure suggests that "We recommend using a dedicated

[ceph-users] OSD disk concern

2017-04-18 Thread gjprabu
Hi Team, The Ceph OSD disk allocation procedure suggests "We recommend using a dedicated drive for the operating system and software, and one drive for each Ceph OSD Daemon you run on the host". We have only SSD disks; is it advisable to run the OS and an OSD on the same disk?

Re: [ceph-users] 回复: Re: ceph activation error

2017-04-18 Thread gjprabu
/lib/ceph/osd/ceph-0 is symlinked to /home/osd1? gorkts From: gjprabu Date: 2017-04-14 14:11 To: gjprabu CC: ceph-users; Tom Verhaeg Subject: Re: [ceph-users] ceph activation error Hi Tom, Is there any solution for this issue? Regards Prabu GJ On Thu, 13 Apr

Re: [ceph-users] ceph activation error

2017-04-13 Thread gjprabu
Hi Tom, Is there any solution for this issue? Regards Prabu GJ On Thu, 13 Apr 2017 18:31:36 +0530 gjprabu gjpr...@zohocorp.com wrote Hi Tom, Yes, it's mounted. I am using centos7 and kernel version 3.10.0-229.el7.x86_64. /dev/xvda3

Re: [ceph-users] ceph activation error

2017-04-13 Thread gjprabu
, Is your OSD mounted correctly on the OS? Tom From: ceph-users ceph-users-boun...@lists.ceph.com on behalf of gjprabu gjpr...@zohocorp.com Sent: Thursday, April 13, 2017 1:13:34 PM To: ceph-users Subject: Re: [ceph-users] ceph activation error Hi All, Anybody facing

Re: [ceph-users] ceph activation error

2017-04-13 Thread gjprabu
Hi All, Is anybody facing this same issue? Regards Prabu GJ On Sat, 04 Mar 2017 09:50:35 +0530 gjprabu gjpr...@zohocorp.com wrote Hi Team, I am installing a new ceph setup (jewel), and while activating the OSD it is throwing the error below

[ceph-users] ceph activation error

2017-03-03 Thread gjprabu
Hi Team, I am installing a new ceph setup (jewel), and while activating the OSD it is throwing the error below. I am using a directory-based OSD like /home/osd1, not an entire disk. In an earlier installation a month back everything worked fine, but this time I am getting the error below.
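
For a directory-backed OSD on Jewel, the prepare/activate pair typically looks like this with ceph-deploy (hostname is a placeholder); failures at the activate step are most often ownership or mount problems on the target directory:

    # ceph-deploy osd prepare node1:/home/osd1
    # ceph-deploy osd activate node1:/home/osd1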

Re: [ceph-users] ceph osd activate error

2017-03-01 Thread gjprabu
14:38, gjprabu gjpr...@zohocorp.com wrote: Hi All, Has anybody faced a similar issue, and is there any solution for this? Regards Prabu GJ On Wed, 01 Mar 2017 14:21:14 +0530 gjprabu gjpr...@zohocorp.com wrote Hi Team, We are installing a new

Re: [ceph-users] ceph osd activate error

2017-03-01 Thread gjprabu
Hi All, Has anybody faced a similar issue, and is there any solution for this? Regards Prabu GJ On Wed, 01 Mar 2017 14:21:14 +0530 gjprabu gjpr...@zohocorp.com wrote Hi Team, We are installing a new ceph setup, version jewel, and while activating the OSD it

Re: [ceph-users] How to hide internal ip on ceph mount

2017-03-01 Thread gjprabu
Hi Robert, This container host will be provided to end users, and we don't want to expose this IP to them. Regards Prabu GJ On Wed, 01 Mar 2017 16:03:49 +0530 Robert Sander r.san...@heinlein-support.de wrote On 01.03.2017 10:54, gjprabu wrote: Hi, We try

Re: [ceph-users] How to hide internal ip on ceph mount

2017-03-01 Thread gjprabu
07:19, gjprabu wrote: > How can we hide the internal IP address on cephfs mounts? For > security reasons we need to hide the IP address. We are also running docker > containers on the base machine, which show the partition details > there. Kindly let us know if there

Re: [ceph-users] How to hide internal ip on ceph mount

2017-03-01 Thread gjprabu
. mount.ceph will resolve whatever you provide to an IP address and pass it to the kernel. 2017-02-28 16:14 GMT+08:00 Robert Sander <r.san...@heinlein-support.de>: On 28.02.2017 07:19, gjprabu wrote: > How can we hide the internal IP address on cephfs mounts? For > security re

[ceph-users] ceph osd activate error

2017-03-01 Thread gjprabu
Hi Team, We are installing a new ceph setup, version jewel, and while activating the OSD it is throwing the error RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /home/data/osd1. We tried reinstalling the OSD machine and still get the same error.
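
One frequent cause of this particular ceph-disk activate failure on Jewel is that the daemons run as user ceph rather than root, so the OSD directory must be owned accordingly; a hedged sketch:

    # chown -R ceph:ceph /home/data/osd1
    # /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /home/data/osd1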

[ceph-users] How to hide internal ip on ceph mount

2017-02-27 Thread gjprabu
Hi Team, How can we hide the internal IP address on cephfs mounts? For security reasons we need to hide the IP address. We are also running docker containers on the base machine, which show the partition details there. Kindly let us know if there is any solution for this.
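
For context, mount.ceph resolves whatever name you pass to an IP address before handing it to the kernel, so even a hostname-based mount ends up exposing the monitor IP (names and addresses below are placeholders):

    # mount -t ceph mon1.example.com:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # grep ceph /proc/mounts    # still shows the resolved IP, e.g. 192.0.2.10:6789:/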

Re: [ceph-users] ceph upgrade from hammer to jewel

2017-02-23 Thread gjprabu
Hi zhong, Yes, one of the clients had not upgraded its ceph-fuse version; now it's working, thank you. Regards Prabu GJ On Thu, 23 Feb 2017 15:08:42 +0530 zhong2p...@gmail.com wrote are you sure you have ceph-fuse upgraded? #ceph-fuse --version 2017-02-23 16:07 GMT+08:00 gjprabu <g

[ceph-users] ceph upgrade from hammer to jewel

2017-02-23 Thread gjprabu
Hi Team, We upgraded ceph from 0.94.9 hammer to 10.2.5 jewel. Some clients are still showing the older version when mounting in debug mode; does this cause any issue with the OSDs and MONs? How do we find the solution? New version and properly working client
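
To pin down which side is outdated after an upgrade, a quick comparison of cluster and client versions, roughly:

    # ceph tell mon.* version   # monitor versions
    # ceph tell osd.* version   # OSD versions
    # ceph-fuse --version       # on each client; a stale ceph-fuse keeps reporting hammer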

Re: [ceph-users] cephfs quota

2016-12-18 Thread gjprabu
, quotas are only enabled once you mount with the --client-quota option. Cheers Goncalo From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of gjprabu [gjpr...@zohocorp.com] Sent: 16 December 2016 18:18 To: gjprabu Cc: ceph-users; Siva

Re: [ceph-users] cephfs quota

2016-12-15 Thread gjprabu
4.8T 10% /mnt/test du -sh test 1.9G test Regards Prabu GJ On Fri, 16 Dec 2016 11:18:46 +0530 gjprabu gjpr...@zohocorp.com wrote Hi David, Thanks for your mail. We are currently using the Linux kernel CephFS client; is it possible to use ceph-fuse without disturbi

Re: [ceph-users] cephfs quota

2016-12-15 Thread gjprabu
Dec 2016 13:11:50 +0530, gjprabu wrote: We are using ceph version 10.2.4 (Jewel), and data is mounted with the cephfs file system on Linux. We are trying to set quotas for directories and files, but it didn't work with the document below. I set a 100 MB directory quota, but after reaching it, it keeps

[ceph-users] cephfs quota

2016-12-14 Thread gjprabu
Hi Team, We are using ceph version 10.2.4 (Jewel), and data is mounted with the cephfs file system on Linux. We are trying to set quotas for directories and files, but it didn't work with the document below. I set a 100 MB directory quota, but after reaching it, it keeps allowing me to put
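
For reference, CephFS quotas in Jewel are set as xattrs and are enforced only by ceph-fuse/libcephfs, not by the kernel client; a sketch (the mount path is a placeholder):

    # setfattr -n ceph.quota.max_bytes -v 104857600 /mnt/cephfs/dir   # 100 MB limit
    # getfattr -n ceph.quota.max_bytes /mnt/cephfs/dir                # verify
    # ceph-fuse --client-quota /mnt/cephfs                            # quota-aware mount, per the reply above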

Re: [ceph-users] cephfs toofull

2016-09-09 Thread gjprabu
Hi Gregory, My doubt has been cleared; by default ceph will allow around 82% of the space to be used for data, and we can increase this value using osd_backfill_full_ratio. Regards Prabu GJ On Tue, 30 Aug 2016 17:05:34 +0530 gjprabu gjpr...@zohocorp.com wrote Hi Gregory
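
The knob mentioned here can be changed at runtime with injectargs; a sketch, keeping the value safely below the 95% full ratio at which OSDs stop accepting writes:

    # ceph tell osd.* injectargs '--osd_backfill_full_ratio 0.90'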

Re: [ceph-users] cephfs toofull

2016-08-30 Thread gjprabu
Farnum gfar...@redhat.com wrote On Mon, Aug 29, 2016 at 12:53 AM, Christian Balzer ch...@gol.com wrote: On Mon, 29 Aug 2016 12:51:55 +0530 gjprabu wrote: Hi Christian, Sorry for the subject, and thanks for your reply. > That's incredibly small in terms of OSD numbers

Re: [ceph-users] cephfs toofull

2016-08-29 Thread gjprabu
gjprabu wrote: Hi Christian, Sorry for the subject, and thanks for your reply. > That's incredibly small in terms of OSD numbers, how many hosts? What replication size? Total hosts: 5. Replication size: 2. At this replication size you need to act and replace

Re: [ceph-users] cephfs toofull

2016-08-29 Thread gjprabu
...@gol.com wrote Hello, First of all, the subject is misleading. It doesn't matter if you're using CephFS; the toofull status is something that OSDs are in. On Mon, 29 Aug 2016 12:06:21 +0530 gjprabu wrote: Hi All, We are new to cephfs and we have 5 OSD

[ceph-users] cephfs toofull

2016-08-29 Thread gjprabu
Hi All, We are new to cephfs; we have 5 OSDs, each 3.3 TB in size. As of now around 12 TB of data has been stored; unfortunately osd5 went down, and during remapped+backfill the error below shows even though we have around 2 TB of free space. Kindly provide the solution to

Re: [ceph-users] How to hide monitoring ip in cephfs mounted clients

2016-08-16 Thread gjprabu
Hi John, Any further update on this? Regards Prabu GJ On Thu, 28 Jul 2016 16:05:00 +0530 gjprabu gjpr...@zohocorp.com wrote Hi John, Thanks for your reply. It's a normal docker container that can see mount information like /dev/sda

Re: [ceph-users] How to hide monitoring ip in cephfs mounted clients

2016-07-28 Thread gjprabu
Hi All, If anybody is facing a similar issue, please let us know how to hide or avoid using the cephfs monitor IP while mounting the partition. Regards Prabu GJ On Wed, 20 Jul 2016 13:03:31 +0530 gjprabu gjpr...@zohocorp.com wrote Hi Team, We are using

[ceph-users] How to hide monitoring ip in cephfs mounted clients

2016-07-20 Thread gjprabu
Hi Team, We are using the cephfs file system to mount client machines; while mounting we must provide the monitor IP address. Is there any option to hide the monitor IP address in the mounted partition? We are using containers on all ceph clients, and they are all able to see the monitor

Re: [ceph-users] Recommended OSD size

2016-05-23 Thread gjprabu
Hi Christian, Please share your suggestion. Regards Prabu GJ On Sat, 21 May 2016 17:33:33 +0530 gjprabu gjpr...@zohocorp.com wrote Hi Christian, Typo in my previous mail. Thanks for your reply; it will be very helpful if we get the details on OSDs per

Re: [ceph-users] Recommended OSD size

2016-05-21 Thread gjprabu
+0530 gjprabu gjpr...@zohocorp.com wrote Hi Christian, Thanks for your reply; our performance requirements are as below. It will be very helpful if you provide the details for the scenario below. As of now we have 6 TB of data usage; in future the size will be 10 TB. Per ceph client

Re: [ceph-users] Recommended OSD size

2016-05-20 Thread gjprabu
:- kB/s 100144. We will be using around 10 clients. Read: 57726 kB/s x 10 clients; Write: 100144 kB/s x 10 clients. Regards Prabu GJ On Fri, 13 May 2016 13:23:56 +0530 Christian Balzer ch...@gol.com wrote On Fri, 13 May 2016 12:38:05 +0530 gjprabu wrote

Re: [ceph-users] Ceph ANT task and file is empty

2016-05-12 Thread gjprabu
Hi, Is anybody facing a similar issue? Please share the solution. Regards Prabu GJ On Wed, 11 May 2016 17:38:15 +0530 gjprabu gjpr...@zohocorp.com wrote Hi, We are using ceph rbd with a cephfs-mounted file system. While using the ant copy task within the ceph shared

[ceph-users] Ceph ANT task and file is empty

2016-05-11 Thread gjprabu
Hi, We are using ceph rbd with a cephfs-mounted file system. While using the ant copy task within the ceph shared directory, the file is copied properly, but after a few seconds the content becomes empty. Is there any solution for this issue? Regards Prabu GJ

[ceph-users] ceph mds error

2016-04-05 Thread gjprabu
Hi, We have configured ceph rbd with the cephfs filesystem, and we are getting the error below on the MDS. Also, the cephfs-mounted partition is showing double the actual data size: 500 GB of data with a used size of 1.1 TB. Is this because of replicas? If so, we have replica 2. Kindly let us

Re: [ceph-users] OSD size and performance

2016-01-05 Thread gjprabu
Hi Srinivas, Do we have any other options to check this issue? Regards Prabu On Mon, 04 Jan 2016 17:32:03 +0530 gjprabu gjpr...@zohocorp.com wrote Hi Srinivas, I am not sure RBD supports SCSI, but OCFS2 has the capability to lock and unlock while writing

Re: [ceph-users] OSD size and performance

2016-01-04 Thread gjprabu
device should support SCSI reservation, so that OCFS2 can take a write lock while writing on a particular client to avoid corruption. Thanks, Srinivas From: gjprabu [mailto:gjpr...@zohocorp.com] Sent: Monday, January 04, 2016 1:40 PM To: Srinivasula Maram Cc: Somnath Roy; ceph-users; Siva

Re: [ceph-users] OSD size and performance

2016-01-04 Thread gjprabu
reservation support for cluster file system. Thanks, Srinivas From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Somnath Roy Sent: Monday, January 04, 2016 12:29 PM To: gjprabu Cc: ceph-users; Siva Sokkumuthu Subject: Re: [ceph-users] OSD size and performance

Re: [ceph-users] OSD size and performance

2016-01-03 Thread gjprabu
Hi Somnath, Please check the details below and let us know if you need any other information. Regards Prabu On Sat, 02 Jan 2016 08:47:05 +0530 gjprabu gjpr...@zohocorp.com wrote Hi Somnath, Please check the details and help me with this issue

Re: [ceph-users] OSD size and performance

2016-01-03 Thread gjprabu
Somnath From: gjprabu [mailto:gjpr...@zohocorp.com] Sent: Sunday, January 03, 2016 10:53 PM To: gjprabu Cc: Somnath Roy; ceph-users; Siva Sokkumuthu Subject: Re: [ceph-users] OSD size and performance Hi Somnath, Please check the details below and let us know if you

Re: [ceph-users] OSD size and performance

2016-01-01 Thread gjprabu
Hi Somnath, Please check the details and help me with this issue. Regards Prabu On Thu, 31 Dec 2015 12:50:36 +0530 gjprabu gjpr...@zohocorp.com wrote Hi Somnath, We are using RBD; please find the Linux and rbd versions. I agree this is related

Re: [ceph-users] OSD size and performance

2015-12-30 Thread gjprabu
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of gjprabu Sent: Tuesday, December 29, 2015 9:38 PM To: ceph-users Cc: Siva Sokkumuthu Subject: Re: [ceph-users] OSD size and performance Hi Team, Can anybody please clarify the queries below? Regards

Re: [ceph-users] OSD size and performance

2015-12-29 Thread gjprabu
Hi Team, Can anybody please clarify the queries below? Regards Prabu On Tue, 29 Dec 2015 13:03:45 +0530 gjprabu gjpr...@zohocorp.com wrote Hi Team, We are using ceph with 3 OSDs and 2 replicas. Each OSD is 13 TB in size, and current data has reached 2.5 TB

[ceph-users] OSD size and performance

2015-12-28 Thread gjprabu
Hi Team, We are using ceph with 3 OSDs and 2 replicas. Each OSD is 13 TB in size, and current data has reached 2.5 TB (on each OSD). Will we face any problem because of this huge size? OSD server configuration: Hard disk: 13 TB; RAM: 96 GB; CPU: 2 CPUs, each with 8 cores.

Re: [ceph-users] ceph new osd addition and client disconnected

2015-11-03 Thread gjprabu
ctay...@eyonic.com wrote On 2015-11-02 10:19 pm, gjprabu wrote: Hi Taylor, I have checked the DNS names and all hosts resolve to the correct IP. The MTU size is 1500 and the switch-level configuration is done. No firewall/selinux is currently running. Also we would like

Re: [ceph-users] ceph new osd addition and client disconnected

2015-11-02 Thread gjprabu
een hosts? Hope that helps, Chris On 2015-11-02 9:18 pm, gjprabu wrote: Hi, Can anybody please help me with this issue? Regards Prabu On Mon, 02 Nov 2015 17:54:27 +0530 gjprabu gjpr...@zohocorp.com wrote Hi Team, We have a ceph setup with 2 OSDs and

Re: [ceph-users] ceph new osd addition and client disconnected

2015-11-02 Thread gjprabu
Hi, Can anybody please help me with this issue? Regards Prabu On Mon, 02 Nov 2015 17:54:27 +0530 gjprabu gjpr...@zohocorp.com wrote Hi Team, We have a ceph setup with 2 OSDs and replica 2; it is mounted with ocfs2 clients and is working. When we added a new

[ceph-users] ceph new osd addition and client disconnected

2015-11-02 Thread gjprabu
Hi Team, We have a ceph setup with 2 OSDs and replica 2; it is mounted with ocfs2 clients and is working. When we added a new OSD, all the clients' rbd-mapped devices disconnected and hung when running the rbd ls or rbd map commands. We waited for long hours for the new OSD to scale the size

Re: [ceph-users] ceph same rbd on multiple client

2015-10-23 Thread gjprabu
On 15-10-23 08:40, gjprabu wrote: Hi Frederic, Can you give me a solution? We are spending a lot of time trying to solve th

Re: [ceph-users] ceph same rbd on multiple client

2015-10-22 Thread gjprabu
have non-concurrent writes though Sent from TypeMail On Oct 15, 2015, at 1:53 AM, gjprabu gjpr...@zohocorp.com wrote: Hi Tyler, Can you please send me the next setup action to be taken on this issue? Regards Prabu On Wed, 14 Oct 2015 13:43:29 +0530 gjprabu gjpr...@zohocorp.com

Re: [ceph-users] ceph same rbd on multiple client

2015-10-14 Thread gjprabu
From: "gjprabu" gjpr...@zohocorp.com To: "Frédéric Nass" frederic.n...@univ-lorraine.fr Cc: "ceph-users@lists.ceph.com"

Re: [ceph-users] ceph same rbd on multiple client

2015-10-13 Thread gjprabu
can use a clustered filesystem like OCFS2 or GFS2 on top of RBD mappings so that each host can access the same device and clustered filesystem. Regards, Frédéric. On 21/05/2015 16:10, gjprabu wrote: -- Frédéric Nass Sous direction des Infrastructures, Direction du Numérique

Re: [ceph-users] input / output error

2015-10-09 Thread gjprabu
Hi All, Can anybody please help me with this issue? Regards Prabu On Thu, 08 Oct 2015 12:35:27 +0530 gjprabu gjpr...@zohocorp.com wrote Hi All, We have CEPH RBD with OCFS2-mounted servers. We are facing I/O errors while moving data

[ceph-users] input / output error

2015-10-08 Thread gjprabu
Hi All, We have CEPH RBD with OCFS2-mounted servers. We are facing I/O errors while moving data within the same disk (copying has no problem). As a temporary fix we remount the partition and the issue is resolved, but after some time the problem reproduces again. If

[ceph-users] ceph admin node

2015-10-06 Thread gjprabu
Hi Team, If I lose the admin node, what will be the recovery procedure with the same keys? Regards Prabu
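
The admin node holds no unique cluster state beyond ceph.conf and the keyrings, so recovery is mostly re-exporting the admin key from a surviving monitor and copying ceph.conf over; a hedged sketch run on a mon host:

    # ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring
    # scp /etc/ceph/ceph.conf newadmin:/etc/ceph/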

Re: [ceph-users] ceph distributed osd

2015-09-01 Thread gjprabu
+0530 gjprabu gjpr...@zohocorp.com wrote Hi Robert, Thanks for your reply. We understand the scenarios. Regards Prabu On Thu, 20 Aug 2015 00:15:41 +0530 rob...@leblancnet.us wrote -BEGIN PGP SIGNED MESSAGE- Hash: SHA256 By default, all pools

Re: [ceph-users] ceph distributed osd

2015-08-19 Thread gjprabu

Re: [ceph-users] ceph distributed osd

2015-08-18 Thread gjprabu
. As everything is thin-provisioned you can create an RBD with an arbitrary size - I've created one with 1 PB when the cluster only had 600G raw available. On Mon, Aug 17, 2015 at 1:18 PM, gjprabu <gjpr...@zohocorp.com> wrote: Hi All, Can anybody help with this issue? Regards

Re: [ceph-users] ceph distributed osd

2015-08-17 Thread gjprabu
Hi All, Can anybody help with this issue? Regards Prabu On Mon, 17 Aug 2015 12:08:28 +0530 gjprabu <gjpr...@zohocorp.com> wrote Hi All, Also please find the osd information. ceph osd dump | grep 'replicated size' pool 2 'repo' replicated size

Re: [ceph-users] ceph distributed osd

2015-08-17 Thread gjprabu
19:42:11 +0530 gjprabu <gjpr...@zohocorp.com> wrote Dear Team, We are using two ceph OSDs with replica 2 and it is working properly. My doubt is: Pool A's image size will be 10 GB and it is replicated across two OSDs; what will happen if the size reaches the limit

Re: [ceph-users] ceph distributed osd

2015-08-17 Thread gjprabu
, 17 Aug 2015 11:58:55 +0530 gjprabu <gjpr...@zohocorp.com> wrote Hi All, We need to test three OSDs and one image with replica 2 (size 1 GB). While testing, data does not write above 1 GB. Is there any option to write to the third OSD? ceph osd pool get repo pg_num pg_num: 126

[ceph-users] ceph distributed osd

2015-08-13 Thread gjprabu
Dear Team, We are using two ceph OSDs with replica 2 and it is working properly. My doubt is: Pool A's image size will be 10 GB and it is replicated across two OSDs; what will happen if the size reaches the limit? Is there any chance to make the data continue writing to
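
Since RBD images are thin-provisioned, the image size is only a cap and can be raised later; a sketch with assumed pool/image names (the filesystem on top must then be grown as well, e.g. with resize2fs for ext4):

    # rbd create repo/imageA --size 10240   # 10 GB cap, no space consumed up front
    # rbd resize repo/imageA --size 20480   # raise the cap to 20 GB as the limit nears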

[ceph-users] ceph osd mounting issue with ocfs2

2015-07-30 Thread gjprabu
Hi All, We are using ceph with two OSDs and three clients. The clients mount with the OCFS2 file system. When I start mounting, only two clients can mount properly, and the third client gives the errors below. Sometimes I am able to mount the third client, but data does not sync to the third
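
A classic cause of exactly two clients mounting fine while a third fails is an OCFS2 volume formatted with too few node slots; a hedged check-and-fix sketch (the device is a placeholder):

    # tunefs.ocfs2 -Q "%N\n" /dev/rbd0   # how many node slots the volume has
    # tunefs.ocfs2 -N 4 /dev/rbd0        # raise the slot count so more nodes can mount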

Re: [ceph-users] Ceph OSD with OCFS2

2015-06-16 Thread gjprabu
.. You are cloning a git source repository on top of RBD + OCFS2 and that is taking extra time? Thanks & Regards Somnath From: gjprabu [mailto:gjpr...@zohocorp.com] Sent: Monday, June 15, 2015 9:39 PM To: gjprabu Cc: Somnath Roy; Kamala Subramani; ceph-users@lists.ceph.com

Re: [ceph-users] Ceph OSD with OCFS2

2015-06-15 Thread gjprabu
. << Also please let us know the reason (an extra 2-3 minutes is taken for hg/git repository operations like clone, pull, checkout and update.) Could you please explain a bit what you are trying to do here? Thanks & Regards Somnath From: gjprabu [mailto:gjpr...@zohocorp.com

Re: [ceph-users] Ceph OSD with OCFS2

2015-06-15 Thread gjprabu
? In the ceph shared directory we will clone the source repository, then access it from the ceph client. Regards Prabu On Mon, 15 Jun 2015 17:16:11 +0530 gjprabu <gjpr...@zohocorp.com> wrote Hi, The size-difference issue is solved; it is related to the ocfs2 format option -C

Re: [ceph-users] Ceph OSD with OCFS2

2015-06-12 Thread gjprabu
removing an rbd image? If you are removing an entire pool, that should be fast and deletes data in an async way, I guess. Thanks & Regards Somnath From: gjprabu [mailto:gjpr...@zohocorp.com] Sent: Thursday, June 11, 2015 6:38 AM To: Somnath Roy Cc: ceph-users@lists.ceph.com; Kamala

Re: [ceph-users] Ceph OSD with OCFS2

2015-06-11 Thread gjprabu
...@lists.ceph.com] On Behalf Of gjprabu Sent: Friday, June 05, 2015 3:07 AM To: ceph-users@lists.ceph.com Cc: Kamala Subramani; Siva Sokkumuthu Subject: [ceph-users] Ceph OSD with OCFS2 Dear Team, We are newly using ceph with two OSDs and two clients. Both clients are mounted

[ceph-users] Ceph OSD with OCFS2

2015-06-05 Thread gjprabu
Dear Team, We are newly using ceph with two OSDs and two clients. Both clients are mounted with the OCFS2 file system. Suppose I transfer 500 MB of data on a client: it shows double the size, 1 GB, after the data transfer finishes. Is this behavior correct, or is there any solution for
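
The doubling here is usually an accounting effect rather than duplicated data: ceph replication is invisible to the client, but an OCFS2 volume formatted with a large cluster size makes every small file consume a whole cluster. As the follow-up in this thread notes, it was resolved via the -C format option; roughly:

    # mkfs.ocfs2 -b 4K -C 4K -N 2 /dev/rbd0   # small cluster size; values are assumptions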

[ceph-users] Ceph RBD and Cephfuse

2015-06-02 Thread gjprabu
Hi Team, We are newly using ceph with two OSDs and two clients. Our requirement is that when we write data through one client, it should be seen on the other client as well. The storage is mounted using rbd because we run git clone with a large number of small files and it is fast when using an rbd mount, but

Re: [ceph-users] Ceph RBD and Cephfuse

2015-06-02 Thread gjprabu
On 2015-06-02T15:40:54, gjprabu <gjpr...@zohocorp.com> wrote: > Hi Team, > > We are newly using ceph with two OSDs and two clients. Our requirement is that when we write data through one client it should be seen on the other client as well; storage is mounted using rbd because we are running git

Re: [ceph-users] ceph same rbd on multiple client

2015-05-22 Thread gjprabu
NFS or Samba shares. The exact setup depends on what you need. Cheers, Vasily. On Thu, May 21, 2015 at 6:47 PM, gjprabu <gjpr...@zohocorp.com> wrote: Hi Angapov, I have seen the message below on the official ceph site; how is it considered for production use? Important: CephFS currently lacks

[ceph-users] ceph same rbd on multiple client

2015-05-21 Thread gjprabu
Hi All, We are using rbd and map the same rbd image to an rbd device on two different clients, but I can't see the data until I umount and mount -a the partition. Kindly share the solution for this issue. Example: create an rbd image named foo; map foo to /dev/rbd0 on server A; mount /dev/rbd0
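
For the record, mapping one RBD image on two hosts with a non-cluster filesystem cannot work: each kernel caches independently, so writes from server A stay invisible on server B until a remount, and concurrent writes corrupt the filesystem. The working pattern is a cluster filesystem on the shared device, sketched with assumed names:

    # rbd map foo                 # on server A and on server B
    # mkfs.ocfs2 -N 2 /dev/rbd0   # once, from one node; ext4/xfs here risks corruption
    # mount /dev/rbd0 /mnt/shared # on both nodes, after o2cb cluster configuration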

Re: [ceph-users] ceph same rbd on multiple client

2015-05-21 Thread gjprabu

Re: [ceph-users] ceph same rbd on multiple client

2015-05-21 Thread gjprabu