Hi Team,
We have 9 OSDs, and when we run ceph osd df it shows TOTAL SIZE:
31 TiB, USE: 13 TiB, AVAIL: 18 TiB, %USE: 42.49. When checked on the client
machine it shows Size: 14T, USE: 6.5T, AVAIL: 6.6T; around 3 TB is
missing. We are using replication size 2. Any
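A rough way to reconcile the two views (the mount point below is a placeholder): the
client's df figures come from the pool statistics, so with replica 2 the usable numbers
are roughly half the raw ones, and the pool's MAX AVAIL is derived from the fullest
OSDs, which is why it can sit well below raw AVAIL divided by 2 when data is unevenly
spread.
# Raw usage per OSD and in total
ceph osd df
# Per-pool stats; with replica 2, MAX AVAIL is roughly free space on the fullest OSDs / 2
ceph df detail
# The kernel client's df output is derived from those pool statistics
df -h /mnt/cephfs        # /mnt/cephfs is a placeholder mount point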
Hi Yan,
Sorry for the late reply; it is the kernel client, and the ceph version is 10.2.3.
It is not reproducible on other mounts.
Regards
Prabu GJ
On Thu, 14 Dec 2017 12:18:52 +0530 Yan, Zheng uker...@gmail.com
wrote
On Thu, Dec 14, 2017 at 2:14 PM, gjprabu gjpr
Hi Team,
Today we found that one client's data was not accessible; it shows up
as "d? ? ? ??? backups" in ls. Has anybody
faced the same, and is there any solution for this?
[root@ /]# cd /data/build/repository/rep/lab
[root@integ-hm11 gitlab]# ls -althr
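When ls prints question marks like that, stat() on those entries is failing rather than
the entries being gone. A few generic checks, sketched with placeholder paths:
# On the affected client, look for cephfs errors from the kernel client
dmesg | grep -i ceph | tail -20
# From an admin node, confirm the cluster and MDS are healthy
ceph -s
ceph mds stat
# If the client session was evicted or went stale, remounting often restores access
umount /data/build && mount -a     # assuming /data/build is the cephfs mount point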
, 2017 at 12:11 AM gjprabu gjpr...@zohocorp.com wrote:
Hi David,
This is our current status.
~]# ceph status
cluster b466e09c-f7ae-4e89-99a7-99d30eba0a13
health HEALTH_WARN
mds0: Client integ-hm3 failing to respond to cache pressure
mds0
Regards
Prabu GJ
On Mon, 20 Nov 2017 21:35:17 +0530 David Turner
drakonst...@gmail.com wrote
What is your current `ceph status` and `ceph df`? The status of your cluster
has likely changed a bit in the last week.
On Mon, Nov 20, 2017 at 6:00 AM gjprabu gjpr
.
The built-in reweighting scripts might help your data distribution.
reweight-by-utilization
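A sketch of how that might be run on jewel-era releases (the 120 threshold is just an
example; do the dry run first):
# Dry run: show which OSDs would be reweighted and by how much
ceph osd test-reweight-by-utilization 120
# Apply: reweight OSDs whose utilization exceeds 120% of the average
ceph osd reweight-by-utilization 120
# Then watch the distribution even out
ceph osd df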
On Sun, Nov 12, 2017, 11:41 AM gjprabu gjpr...@zohocorp.com wrote:
Hi David,
Thanks for your valuable reply. Once the backfilling for the new OSD completes, we
will consider increasing the replica
great! But if not, at least do the
metadata pool. If you lose an object in the data pool, you just lose that file.
If you lose an object in the metadata pool, you might lose access to the entire
CephFS volume. On Sun, Nov 12, 2017, 9:39 AM gjprabu <gjpr...@zohocorp.com>
wrote:Hi Cassiano, Tha
745, avenue de l'Université
76800 Saint-Etienne du Rouvray - France
tél. +33 2 32 91 42 91
fax. +33 2 32 91 42 92
http://www.criann.fr
mailto:sebastien.vigne...@criann.fr
support: supp...@criann.fr
On 12 Nov 2017 at 13:29, gjprabu gjpr...@zohocorp.com wrote:
Hi
Hi Sebastien,
Below are the query details. I am not much of an expert and am still learning.
The PGs are not stuck stat
On 12 Nov 2017 at 10:04, gjprabu gjpr...@zohocorp.com wrote
Hi Team,
We have a ceph setup with 6 OSDs and we got an alert that 2 OSDs are near full.
We faced issues like slow access to ceph from the client, so I added a 7th
OSD, but 2 OSDs (osd.0 and osd.4) are still showing near full. I have restarted the
ceph service on osd.0 and osd.4. Kindly
Hi Team,
We are having an issue with mounting ceph; it is throwing the error "mount
error 5 = Input/output error", and the MDS ceph-zstorage1 also seems to be laggy.
Kindly help us with this issue.
cluster a8c92ae6-6842-4fa2-bfc9-8cdefd28df5c
health HEALTH_WARN
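A few checks that might help here, assuming the MDS was deployed under systemd (the
unit name below is a guess based on the host name in the message):
# Which MDS is active, and is it laggy?
ceph mds stat
ceph -s
# On the MDS host ceph-zstorage1, inspect and restart the daemon if it is wedged
systemctl status ceph-mds@ceph-zstorage1
systemctl restart ceph-mds@ceph-zstorage1
# Retry the mount once the MDS reports up:active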
Hi John,
Thanks for your reply. We are using the below version for the client and MDS (ceph
version 10.2.2).
Regards
Prabu GJ
On Wed, 10 May 2017 12:29:06 +0530 John Spray jsp...@redhat.com
wrote
On Thu, May 4, 2017 at 7:28 AM, gjprabu gjpr...@zohocorp.com wrote:
Hi
s to stall for a while. do at your own risk).
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
On Thu, May 4, 2017 at 3:28 AM, gjprabu gjpr...@zohocorp.com wrote:
Hi Team,
We are running cephfs with 5 OSDs, 3 MONs and 1 MDS. There is a health
warning, "failing to respond to cache pressure". Kindly advise how to fix this issue.
cluster b466e09c-f7ae-4e89-99a7-99d30eba0a13
health HEALTH_WARN
mds0: Client
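This warning usually means a client is holding more inodes/caps than the MDS has asked
it to release; old kernel or ceph-fuse clients are a frequent cause. A rough way to
investigate (the MDS name is a placeholder):
# On the MDS host, list client sessions and how many caps each holds
ceph daemon mds.<name> session ls
# Check the kernel / ceph-fuse version on the client named in the warning (integ-hm3)
# As a stopgap, the MDS cache can be enlarged (jewel-era option, value is an example):
ceph tell mds.<name> injectargs '--mds_cache_size 300000'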
:43, gjprabu wrote:
Hi Team,
I am running a cephfs setup with a single MDS. In a single-MDS
setup, if the MDS goes down, what will happen to the data? Is it advisable to run
multiple MDSs?
MDS data is stored in the Ceph cluster itself. After an MDS failure you can start another MDS
on a different
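A minimal sketch of adding a standby, assuming ceph-deploy is in use and mds2 is a
placeholder host name:
# Create a second MDS daemon on another host; it registers as a standby
ceph-deploy mds create mds2
# Verify: one active, one standby
ceph mds stat      # e.g. "e10: 1/1/1 up {0=mds1=up:active}, 1 up:standby"
If the active MDS then dies, the standby takes over and clients keep access, since all
the metadata lives in the Ceph cluster itself.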
wrote
Hi Prabhu,
You can use both the OS and an OSD on a single SSD.
Regards
Shureshbabu
On 19 April 2017 at 11:02, gjprabu gjpr...@zohocorp.com wrote:
Hi Team,
Ceph OSD disk allocation procedure suggested that "We recommend using
a dedicated
Hi Team,
The Ceph OSD disk allocation procedure suggests that "We recommend using
a dedicated drive for the operating system and software, and one drive for each
Ceph OSD Daemon you run on the host". We have only SSD disks; is it
advisable to run the OS and an OSD on the same disk?
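One possible layout, sketched with placeholder device names: keep the OS on its own
partitions and hand a separate partition on the same SSD to the OSD.
# /dev/sda1  /boot
# /dev/sda2  OS root
# /dev/sda3  swap
# /dev/sda4  left empty for the Ceph OSD
ceph-disk prepare /dev/sda4      # jewel / ceph-disk era
ceph-disk activate /dev/sda4
The OS and OSD will share the SSD's IOPS and write endurance, so it works but is a
trade-off compared with the recommended dedicated-drive setup.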
/lib/ceph/osd/ceph-0 is ln to /home/osd1 ?
gorkts
From: gjprabu
Date: 2017-04-14 14:11
To: gjprabu
CC: ceph-users; Tom Verhaeg
Subject: Re: [ceph-users] ceph activation error
Hi Tom,
Is there any solution for this issue.
Regards
Prabu GJ
On Thu, 13 Apr 2017 18:31:36 +0530 gjprabu gjpr...@zohocorp.com
wrote
Hi Tom,
Yes, it's mounted. I am using CentOS 7 and kernel version
3.10.0-229.el7.x86_64.
/dev/xvda3
,
Is your OSD mounted correctly on the OS?
Tom
From: ceph-users ceph-users-boun...@lists.ceph.com on behalf of gjprabu
gjpr...@zohocorp.com
Sent: Thursday, April 13, 2017 1:13:34 PM
To: ceph-users
Subject: Re: [ceph-users] ceph activation error
Hi All,
Is anybody else facing this same issue?
Regards
Prabu GJ
On Sat, 04 Mar 2017 09:50:35 +0530 gjprabu gjpr...@zohocorp.com
wrote
Hi Team,
I am installing a new ceph setup (jewel), and while activating the OSD
it is throwing the below error.
I am using a partition-based OSD like /home/osd1, not an entire disk.
An earlier installation one month back worked fine, but this time I am
getting an error like the one below.
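On jewel the OSD daemons run as the ceph user, and a directory-backed OSD path that is
not owned by that user is one common cause of activation failures; a sketch using the
path from the message:
mkdir -p /home/osd1
chown -R ceph:ceph /home/osd1
ceph-disk prepare /home/osd1
ceph-disk activate /home/osd1
# If it still fails, re-run ceph-disk with -v and check /var/log/ceph/ for the real error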
14:38, gjprabu gjpr...@zohocorp.com wrote:
Hi All,
Has anybody faced a similar issue, and is there any solution for this?
Regards
Prabu GJ
On Wed, 01 Mar 2017 14:21:14 +0530 gjprabu gjpr...@zohocorp.com
wrote
Hi Team,
We are installing a new ceph setup, version jewel, and while activating the OSD it's
Hi Robert,
This container host will be provided to end users, and we don't want to expose
this IP to them.
Regards
Prabu GJ
On Wed, 01 Mar 2017 16:03:49 +0530 Robert Sander
r.san...@heinlein-support.de wrote
On 01.03.2017 10:54, gjprabu wrote:
Hi,
We try
07:19, gjprabu wrote:
> How to hide internal ip address on cephfs mounting. Due to
> security reason we need to hide ip address. Also we are running docker
> container in the base machine and which will shown the partition details
> over there. Kindly let us know is there
.
mount.ceph will resolve whatever you provide to an IP address and pass it to
the kernel.
2017-02-28 16:14 GMT+08:00 Robert Sander <r.san...@heinlein-support.de>:
On 28.02.2017 07:19, gjprabu wrote:
> How to hide internal ip address on cephfs mounting. Due to
> security re
Hi Team,
We are installing a new ceph setup, version jewel, and while activating the OSD it is
throwing the error: RuntimeError: Failed to execute command: /usr/sbin/ceph-disk
-v activate --mark-init systemd --mount /home/data/osd1. We tried to reinstall
the OSD machine and still get the same error.
Hi Team,
How can we hide the internal IP address when mounting cephfs? For
security reasons we need to hide the IP address. Also, we are running docker
containers on the base machine, which show the partition details over
there. Kindly let us know if there is any solution for this.
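As noted above, mount.ceph resolves whatever name is given to an IP before passing it
to the kernel, so the monitor IP ends up in the mount table either way; a quick
illustration (host name, mount point and secret file are placeholders):
mount -t ceph ceph-mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
grep ceph /proc/mounts
# shows the resolved address, e.g. 10.0.0.11:6789:/ /mnt/cephfs ceph rw,...
# which containers sharing the host's mount namespace can also read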
Hi zhong,
Yes, one of the clients had not upgraded its ceph-fuse version; now it's working.
Thank you. Regards
Prabu GJ
On Thu, 23 Feb 2017 15:08:42 +0530 zhong2p...@gmail.com wrote
are you sure you have ceph-fuse upgraded?
#ceph-fuse --version
2017-02-23 16:07 GMT+08:00 gjprabu <g
Hi Team,
We upgraded the ceph version from 0.94.9 hammer to 10.2.5 jewel. Still,
some clients are showing the older version while mounting in debug mode; will this
cause any issue with the OSDs and MONs? How do we find a solution?
New version and properly working client
, quotas are only enabled once you mount with the
--client-quota option.
Cheers
Goncalo
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of gjprabu
[gjpr...@zohocorp.com]
Sent: 16 December 2016 18:18
To: gjprabu
Cc: ceph-users; Siva
4.8T 10% /mnt/test
du -sh test
1.9G test
Regards
Prabu GJ
On Fri, 16 Dec 2016 11:18:46 +0530 gjprabu gjpr...@zohocorp.com
wrote
Hi David,
Thanks for your mail. We are currently using the Linux kernel CephFS client.
Is it possible to use ceph-fuse without disturbi
Hi Team,
We are using ceph version 10.2.4 (Jewel) and the data is mounted with the
cephfs file system on Linux. We are trying to set quotas for directories and files,
but it doesn't work with the below document. I have set a 100 MB quota on a directory,
but after reaching it, it keeps allowing me to put
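For reference, a sketch of how jewel-era CephFS quotas are set and enforced (paths and
the 100 MB value are examples): quotas are stored as extended attributes, and, as the
reply above notes, only a ceph-fuse mount with the client quota option enforces them;
the kernel client of this era ignores them.
# Set a 100 MB quota on a directory (value is in bytes)
setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/somedir
# Read it back
getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir
# Mount with ceph-fuse and quota enforcement enabled
ceph-fuse --client-quota /mnt/cephfs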
Hi Gregory,
My doubt has been cleared: by default cephfs allows storing data up to 82%
full, and we can increase this value using osd_backfill_full_ratio.
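A sketch of checking and raising that ratio at runtime (the 0.90 value is only an
example, and raising it eats into the safety margin):
# On an OSD host, read the current value via the admin socket
ceph daemon osd.0 config get osd_backfill_full_ratio
# Raise it on all OSDs at runtime
ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.90'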
Regards
Prabu GJ
On Tue, 30 Aug 2016 17:05:34 +0530 gjprabu
gjpr...@zohocorp.comwrote
Hi Gregory
Farnum
gfar...@redhat.comwrote
On Mon, Aug 29, 2016 at 12:53 AM, Christian Balzer ch...@gol.com wrote:
On Mon, 29 Aug 2016 12:51:55 +0530 gjprabu wrote:
gjprabu wrote:
Hi Christian,
Sorry for the subject, and thanks for your reply.
> That's incredibly small in terms of OSD numbers, how many hosts?
What replication size?
Total hosts: 5.
Replicated size: 2
At this replication size you need to act and replace
...@gol.comwrote
Hello,
First of all, the subject is misleading.
It doesn't matter if you're using CephFS, the toofull status is something
that OSDs are in.
On Mon, 29 Aug 2016 12:06:21 +0530 gjprabu wrote:
Hi All,
We are new with cephfs and we have 5 OSD
Hi All,
We are new to cephfs and we have 5 OSDs, each 3.3 TB in size. As of
now around 12 TB of data has been stored; unfortunately osd5 went down, and
during remapped+backfill the below error is showing even though we have around 2 TB
of free space. Kindly provide a solution to
Hi John,
Any further update on this.
Regards
Prabu GJ
On Thu, 28 Jul 2016 16:05:00 +0530 gjprabu
gjpr...@zohocorp.comwrote
Hi John,
Thanks for your reply. It's a normal docker container; it can see the mount
information like /dev/sda
Hi All,
Is anybody facing a similar issue? Please let us know how to hide or
avoid using the cephfs monitor IP while mounting the partition.
Regards
Prabu GJ
On Wed, 20 Jul 2016 13:03:31 +0530 gjprabu
gjpr...@zohocorp.comwrote
Hi Team,
We are using
Hi Team,
We are using the cephfs file system to mount client machines. While
mounting we have to provide the monitor IP address; is there any option to
hide the monitor IP address in the mounted partition? We are using containers on
all ceph clients, and they are all able to see the monitor
Hi Christian,
Please share your suggestion.
Regards
Prabu GJ
On Sat, 21 May 2016 17:33:33 +0530 gjprabu
gjpr...@zohocorp.comwrote
Hi Christian,
There was a typo in my previous mail.
Thanks for your reply. It will be very helpful if we get the details on OSDs per
+0530 gjprabu
gjpr...@zohocorp.comwrote
Hi Christian,
Thanks for your reply; our performance requirements are as below.
It would be very helpful if you could provide details for the below scenario.
As of now we have 6 TB of data in use; in future the size will be 10 TB.
Per ceph client
Write :- kB/s 100144
We will have around 10 clients
Read :- kB/s 57726 x 10 clients
Write :- kB/s 100144 x 10 clients
Regards
Prabu GJ
On Fri, 13 May 2016 13:23:56 +0530 Christian Balzer
ch...@gol.comwrote
On Fri, 13 May 2016 12:38:05 +0530 gjprabu wrote
Hi
Is anybody facing a similar issue? Please share the solution.
Regards
Prabu GJ
On Wed, 11 May 2016 17:38:15 +0530 gjprabu
gjpr...@zohocorp.comwrote
Hi,
We are using ceph rbd with a cephfs-mounted file system. While using an ant copy
task within the ceph shared directory, the file is copied properly, but after a few
seconds the content becomes empty. Is there any solution for this issue?
Regards
Prabu GJ
Hi ,
We have configured ceph rbd with the cephfs filesystem and we are getting the
below error on the MDS. Also, the cephfs mounted partition size is showing double
the actual data: the data is 500 GB but the used size shows 1.1 TB. Is this because of
the replicas? If so, we have replica 2. Kindly please let us
Hi srinivas,
Do we have any other options for checking this issue?
Regards
Prabu
On Mon, 04 Jan 2016 17:32:03 +0530 gjprabu
gjpr...@zohocorp.comwrote
Hi Srinivas,
I am not sure RBD supports SCSI, but OCFS2 has the capability to lock and
unlock while writing
the device should support SCSI reservation, so that OCFS2 can take a
write lock while writing on a particular client to avoid corruption.
Thanks,
Srinivas
From: gjprabu [mailto:gjpr...@zohocorp.com]
Sent: Monday, January 04, 2016 1:40 PM
To: Srinivasula Maram
Cc: Somnath Roy; ceph-users; Siva
reservation support
for cluster file system.
Thanks,
Srinivas
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Somnath Roy
Sent: Monday, January 04, 2016 12:29 PM
To: gjprabu
Cc: ceph-users; Siva Sokkumuthu
Subject: Re: [ceph-users] OSD size and performance
Hi Somnath,
Just check the below details and let us know if you need any
other information.
Regards
Prabu
On Sat, 02 Jan 2016 08:47:05 +0530 gjprabu
gjpr...@zohocorp.comwrote
Hi Somnath,
Please check the details and help me on this issue
Somnath
From: gjprabu [mailto:gjpr...@zohocorp.com]
Sent: Sunday, January 03, 2016 10:53 PM
To: gjprabu
Cc: Somnath Roy; ceph-users; Siva Sokkumuthu
Subject: Re: [ceph-users] OSD size and performance
Hi Somnath,
Just check the below details and let us know do you
Hi Somnath,
Please check the details and help me on this issue.
Regards
Prabu
On Thu, 31 Dec 2015 12:50:36 +0530 gjprabu
gjpr...@zohocorp.comwrote
Hi Somnath,
We are using RBD; please find the Linux and RBD versions below. I agree this is
related
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of gjprabu
Sent: Tuesday, December 29, 2015 9:38 PM
To: ceph-users
Cc: Siva Sokkumuthu
Subject: Re: [ceph-users] OSD size and performance
Hi Team,
Could anybody please clarify the below queries?
Regards
Prabu
On Tue, 29 Dec 2015 13:03:45 +0530 gjprabu
gjpr...@zohocorp.comwrote
Hi Team,
We are using ceph with 3 osd and 2 replicas. Each osd size is 13TB
and current data is reached to 2.5TB
Hi Team,
We are using ceph with 3 OSDs and 2 replicas. Each OSD is 13 TB in size,
and the current data has reached 2.5 TB (per OSD). Could this large size cause
us any problems?
OSD server configuration
Hard disk -- 13TB
RAM -- 96GB
CPU -- 2 CPU with multi 8 core processor.
ctay...@eyonic.com
wrote
On 2015-11-02 10:19 pm, gjprabu wrote:
Hi Taylor,
I have checked the DNS names and all hosts resolve to the correct IP. The MTU
size is 1500 and the switch-level configuration is done. There is no firewall or
selinux running currently.
Also we would like
een
hosts?
Hope that helps,
Chris
On 2015-11-02 9:18 pm, gjprabu wrote:
Hi,
Could anybody please help me with this issue?
Regards
Prabu
On Mon, 02 Nov 2015 17:54:27 +0530 gjprabu gjpr...@zohocorp.com
wrote
Hi Team,
We have a ceph setup with 2 OSDs and replica 2, and it is mounted by ocfs2
clients and is working. When we added a new OSD, all the clients' rbd-mapped
devices disconnected and hung when running rbd ls or rbd map commands. We
waited for long hours to scale the new OSD size
On 15-10-23 08:40, gjprabu wrote:
Hi Frederic,
Can you give me a solution? We are spending a lot of time trying to solve
th
have unconccurent writes
though
Sent from TypeMail
On Oct 15, 2015, at 1:53 AM, gjprabu gjpr...@zohocorp.com wrote:
Hi Tyler,
Can you please send me the next action to be taken on this issue?
Regards
Prabu
On Wed, 14 Oct 2015 13:43:29 +0530 gjprabu gjpr...@zohocorp.com
From: "gjprabu" gjpr...@zohocorp.com
To: "Frédéric Nass" frederic.n...@univ-lorraine.fr
Cc: "ceph-users@lists.ceph.com&q
can use clustered filesystem like OCFS2 or GFS2 on top
of RBD mappings so that each host can access the same device and clustered
filesystem.
Regards,
Frédéric.
On 21/05/2015 at 16:10, gjprabu wrote:
-- Frédéric Nass Sous direction des Infrastructures, Direction du Numérique
Hi All,
Could anybody please help me with this issue?
Regards
Prabu
On Thu, 08 Oct 2015 12:35:27 +0530 gjprabu gjpr...@zohocorp.com
wrote
Hi All,
We have CEPH RBD with OCFS2-mounted servers. We are facing I/O errors
while moving data within the same disk (copying does not have any problem).
Temporarily we remount the partition and the issue gets resolved, but
after some time the problem reproduces again. If
Hi Team,
If I lose the admin node, what will be the recovery procedure with the same keys?
Regards
Prabu
+0530 gjprabu gjpr...@zohocorp.com
wrote
Hi Robert,
Thanks for your reply. We understand the scenarios.
Regards
Prabu
On Thu, 20 Aug 2015 00:15:41 +0530 rob...@leblancnet.us wrote
By default, all pools
.
As everything is thin-provisioned you can create an RBD with an arbitrary size -
I've created one with 1 PB when the cluster only had 600G raw available.
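A small illustration of that, with the pool/image names as placeholders and the size
given in MB (the default unit for --size in this era): 1 PB is 1073741824 MB.
rbd create rbd/huge-test --size 1073741824
rbd info rbd/huge-test
# space is only consumed as data is actually written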
On Mon, Aug 17, 2015 at 1:18 PM, gjprabu <gjpr...@zohocorp.com> wrote:
Hi All,
Can anybody help with this issue?
Regards
Prabu
On Mon, 17 Aug 2015 12:08:28 +0530 gjprabu <gjpr...@zohocorp.com>
wrote
Hi All,
Also please find the OSD information below.
ceph osd dump | grep 'replicated size'
pool 2 'repo' replicated size
19:42:11 +0530 gjprabu <gjpr...@zohocorp.com>
wrote
, 17 Aug 2015 11:58:55 +0530 gjprabu <gjpr...@zohocorp.com>
wrote
Hi All,
We need to test three OSDs and one image with replica 2 (size 1 GB). While
testing, data is not written above 1 GB. Is there any option to write to the third
OSD?
ceph osd pool get repo pg_num
pg_num: 126
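The 1 GB ceiling here comes from the image size, not from the OSD count: RADOS already
spreads the image's objects across all OSDs, so the usual fix is simply to grow the
image. A sketch, with the pool/image names and new size as examples:
rbd resize repo/foo --size 10240      # grow to 10 GB (size in MB in this era)
# then grow the filesystem inside the image to use the new space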
Dear Team,
We are using two ceph OSDs with replica 2 and it is working properly.
My doubt is this: Pool A's image size will be 10 GB and it is replicated across two
OSDs; what will happen if the size reaches the limit? Is there any
chance to make the data continue writing in
Hi All,
We are using ceph with two OSDs and three clients. The clients mount with the
OCFS2 file system. When I start mounting, only two clients can mount
properly, and the third client gives the below errors. Sometimes I can
mount the third client, but data does not sync to the third
..
You are cloning a git source repository on top of RBD + OCFS2, and that is taking
extra time?
Thanks & Regards
Somnath
From: gjprabu [mailto:gjpr...@zohocorp.com]
Sent: Monday, June 15, 2015 9:39 PM
To: gjprabu
Cc: Somnath Roy; Kamala Subramani; ceph-users@lists.ceph.com
.
<< Also please let us know the reason (an extra 2-3 mins is taken for hg /
git repository operations like clone, pull, checkout and update.)
Could you please explain a bit what you are trying to do here?
Thanks & Regards
Somnath
From: gjprabu [mailto:gjpr...@zohocorp.com
?
In the ceph shared directory, we will clone a source repository and then access
the same from the ceph client.
Regards
Prabu
On Mon, 15 Jun 2015 17:16:11 +0530 gjprabu <gjpr...@zohocorp.com>
wrote
Hi
The size difference issue is solved. It is related to the ocfs2 format option
-C
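For reference, a sketch of that format option: mkfs.ocfs2's -C sets the cluster
(allocation) size, and a large cluster size inflates the apparent usage of small files,
which matches the doubled-size symptom earlier in the thread; the values below are only
examples.
mkfs.ocfs2 -b 4K -C 4K -N 4 -L shared01 /dev/rbd0
#  -b  block size
#  -C  cluster size (allocation unit)
#  -N  node slots (max simultaneous mounts)
#  -L  volume label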
removing an rbd image?
If you are removing an entire pool, that should be fast, and it deletes the data
in an async way, I guess.
Thanks & Regards
Somnath
From: gjprabu [mailto:gjpr...@zohocorp.com]
Sent: Thursday, June 11, 2015 6:38 AM
To: Somnath Roy
Cc: ceph-users@lists.ceph.com; Kamala
...@lists.ceph.com] On Behalf Of
gjprabu
Sent: Friday, June 05, 2015 3:07 AM
To: ceph-users@lists.ceph.com
Cc: Kamala Subramani; Siva Sokkumuthu
Subject: [ceph-users] Ceph OSD with OCFS2
Dear Team,
We are newly using ceph with two OSDs and two clients. Both clients are
mounted with the OCFS2 file system. Suppose I transfer 500 MB of data on a
client; it shows double the size, 1 GB, after the data transfer finishes. Is this
behavior correct, or is there any solution for
Hi Team,
We are newly using ceph with two OSDs and two clients. Our requirement is
that when we write data through one client, it should be seen on the other client as
well. The storage is mounted using rbd because we run git clone with a large number of
small files and it is fast when using the rbd mount, but
On 2015-06-02T15:40:54, gjprabu <gjpr...@zohocorp.com> wrote:
> Hi Team,
>
> We are newly using ceph with two OSD and two clients, our requirement is
when we write date through clients it should see in another client also,
storage is mounted using rbd because we running git
NFS or Samba shares.
The exact setup depends on what you need.
Cheers, Vasily.
On Thu, May 21, 2015 at 6:47 PM, gjprabu <gjpr...@zohocorp.com> wrote:
Hi Angapov,
I have seen the below message on the ceph official site. How is it considered for use
in production?
Important: CephFS currently lacks
Hi All,
We are using rbd and map the same rbd image to an rbd device on two
different clients, but I can't see the data until I umount and mount the
partition again. Kindly share the solution for this issue.
Example
create rbd image named foo
map foo to /dev/rbd0 on server A, mount /dev/rbd0