Not sure if you have been helped, but this is a known issue if you have many
files/subfolders. It depends on what cephFS version you are running. This should
have been resolved in Red Hat Ceph Storage 3, which is based on Luminous.
http://tracker.ceph.com/issues/19438
David, a few inputs based on my working experience with cephFS. They might or
might not be relevant to the current issue seen in your cluster.
1. Create the metadata pool on NVMe. Folks can claim it is not needed, but I have
seen much worse performance when it is on HDD, even though the metadata size is
very small (see the sketch after item 2).
2. In cephFS,
>> rm: cannot remove '/design/4695/8/6-50kb.jpg': No space left on device
a "No space left on device" error is typically caused when you have more than
about a million files in a single directory. To mitigate this, try increasing
"mds_bal_fragment_size_max" to a higher value, as in the sketch below.
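For reference, a rough sketch of both tunings. The pool name, rule name, and the
fragment value are illustrative, and the device-class rule syntax assumes Luminous
or later:
# Pin the CephFS metadata pool to NVMe OSDs (assumes the OSDs report device class "nvme")
ceph osd crush rule create-replicated meta-on-nvme default host nvme
ceph osd pool set cephfs_metadata crush_rule meta-on-nvme
# Allow more entries per directory fragment (default is 100000; the value here is illustrative)
ceph tell mds.* injectargs '--mds_bal_fragment_size_max 500000'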
Josef, my comments based on experience with the cephFS (Jewel, with 1 MDS)
community (free) version.
* cephFS(Jewel) with a single (stable) MDS performs horribly with millions of
small KB-sized files, even after MDS cache and directory-fragmentation tuning etc.
* cephFS(Jewel) with a single (stable) MDS performs great for PROD.
http://docs.ceph.com/docs/master/releases/
--
Deepak
-Original Message-
From: Henrik Korkuc [mailto:li...@kirneh.eu]
Sent: Wednesday, September 06, 2017 10:50 PM
To: Deepak Naidu; Sage Weil; ceph-de...@vger.kernel.org;
ceph-maintain...@ceph.com; ceph-us...@ceph.com
Subject: Re: [ceph
Hope collective feedback helps. So here's one.
>>- Not a lot of people seem to run the "odd" releases (e.g., infernalis,
>>kraken).
I think the more obvious reason is that companies/users wanting to use Ceph will
stick with LTS versions, as that matches the 3-year support cycle.
>>* Drop the odd releases,
Not sure how often http://docs.ceph.com/docs/master/releases/ gets updated; a
timeline/roadmap helps.
--
Deepak
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Abhishek Lekshmanan
Sent: Tuesday, August 29, 2017 11:20 AM
To:
For a permanent fix, you need a patched kernel or an upgrade to kernel 4.9 or
higher (which includes the fix): http://tracker.ceph.com/issues/17191
Using "allow r" in the [mds] caps gives users read permission on the "/" share,
i.e. on any directories/files under "/", for example "/dir1", "/dir2" or "/MTY".
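If the goal is to avoid handing out read on "/", a per-path cap is one option. A
minimal sketch; the client name, path, and data pool are illustrative:
ceph auth caps client.mty \
  mds 'allow r, allow rw path=/MTY' \
  mon 'allow r' \
  osd 'allow rw pool=cephfs_data'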
Based on my experience, it's really stable and, yes, it is production ready. Most
of the use case for cephFS depends on what you're trying to achieve. A few points
of feedback.
1) Kernel client is nice/stable and can achieve higher bandwidth if you have
40G or higher network.
2) ceph-fuse is very slow, as the
Shameless plug: you can find an example in this blog post
http://www.root314.com/2017/01/15/Ceph-storage-tiers/#tiered-crushmap. I hope it
helps.
Cheers,
Maxime
On Sat, 1 Jul 2017 at 03:28 Deepak Naidu
<dna...@nvidia.com> wrote:
Sorry for the spam, but here is a clearer way of doing a custom crushmap
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-April/038835.html
--
Deepak
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Deepak
Naidu
Sent: Friday, June 30, 2017 7:22 PM
To: David Turner; ceph
OK, so it looks like this is ceph crushmap behavior
http://docs.ceph.com/docs/master/rados/operations/crush-map/
--
Deepak
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Deepak
Naidu
Sent: Friday, June 30, 2017 7:06 PM
To: David Turner; ceph-users@lists.ceph.com
Subject: Re
[truncated ceph osd tree output]
39.09380 osd.3 up 1.0 1.0
--
Deepak
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Friday, June 30, 2017 6:36 PM
To: Deepak Naidu; ceph-users@lists.ceph.com
Subject: Re: [ceph
Hello,
I am getting the below error and I am unable to get it resolved even after
starting and stopping the OSDs. All the OSDs seem to be up.
How do I repair the OSDs or fix them manually? I am using cephFS. Oddly, ceph df
is showing 100% used (which is reported in KB). But the pool
/var/lib/ceph/osd/ceph-##/journal is pointing to the proper journal before
starting it back up.
Unless you REALLY NEED a 70GB journal partition... don't do it.
On Mon, Jun 12, 2017 at 1:07 AM Deepak Naidu
<dna...@nvidia.com> wrote:
Hello folks,
I am trying to use an entire ssd partition as the journal disk, e.g. the /dev/sdf1
partition (70GB). But when I look up the osd config using the below command, I see
ceph-deploy sets journal_size as 5GB. More confusing, I see the OSD logs showing
the correct size in blocks in the
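For what it's worth, a sketch of how to compare the configured value with reality
(the OSD id is illustrative). As far as I understand, when the journal is a whole
partition FileStore uses the entire partition, and osd_journal_size only matters
for file-based journals:
# Runtime value (in MB) on the OSD host
ceph daemon osd.0 config show | grep osd_journal_size
# For a file-based journal, set the size in ceph.conf under [osd] before creating the OSD:
#   osd_journal_size = 71680    # 70 GB, illustrative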
Thanks David for sharing your experience, appreciate it.
--
Deepak
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Friday, June 09, 2017 5:38 AM
To: Deepak Naidu; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] OSD node type/count mixes in the cluster
I ran a cluster with 2
Wanted to check if anyone has a ceph cluster with mixed-vendor servers, both with
the same disk size, i.e. 8TB, but a different disk count. For example, 10 OSD
servers from Dell with 60 disks per server and another 10 OSD servers from HP with
26 disks per server.
If so, does that change any performance
day, June 01, 2017 2:05 PM
To: Deepak Naidu; ceph-users
Subject: Re: [ceph-users] Crushmap from Rack aware to Node aware
If all 6 racks are tagged for Ceph storage nodes, I'd go ahead and just put the
nodes in there now and configure the crush map accordingly. That way you can
grow each of the
Perfect, David, thanks for the detailed explanation. Appreciate it!
In my case I have 10 OSD servers with 60 disks each (ya I know…), i.e. 600 OSDs
total, and I have 3 racks to spare.
--
Deepak
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Thursday, June 01, 2017 12:23 PM
To: Deepak Naidu; ceph-users
Greetings Folks.
Wanted to understand how ceph behaves when we start with a rack-aware crushmap
(rack-level replicas), for example 3 racks and 3 replicas, and it is later
replaced by a node-aware one (node-level replicas), i.e. 3 replicas spread across
nodes. This can also happen vice-versa. If this happens, how does ceph
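For context, the two layouts boil down to different failure domains in the CRUSH
rule. A sketch using the Luminous-style shortcut; rule and pool names are
illustrative:
ceph osd crush rule create-replicated rack-rule default rack
ceph osd crush rule create-replicated host-rule default host
# Re-pointing a pool from one rule to the other remaps PGs and triggers data movement
ceph osd pool set mypool crush_rule host-rule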
It's better
to configure a firewall on the ceph hosts as well, to prevent extra-subnet
communication.
In theory it should work, but I can't say much about how stable it would be.
Best regards,
Vladimir
2017-05-26 20:36 GMT+05:00 Deepak Naidu
<dna...@nvidia.com>:
Thanks David.
>>Every single one of the above needs to be able to access all of the mons and
>>osds. I don't think you can have multiple subnets for this,
Yes, that's why this multi-tenancy question.
>>but you can do this via routing. Say your private osd network is
>>xxx.xxx.10.0, your public
Hi Vlad,
Thanks for chiming in.
>>It's not clear what you want to achieve from the ceph point of view?
Multi-tenancy. We will have multiple tenants from different isolated
subnets/networks accessing a single ceph cluster that can support multiple
tenants. The only problem I see with ceph in a
I am trying to gather and understand how multi-tenancy can be, or has been, solved
for network interfaces or isolation. I can run ceph in a virtualized env and
achieve the isolation, but my question/thought is more about the physical ceph
deployment.
Is there a way we can have multiple
Yes, vi creates a swap file and nano doesn't. But when I try fio to write, I don't
see this happening.
--
Deepak
From: Chris Sarginson [mailto:csarg...@gmail.com]
Sent: Thursday, April 13, 2017 2:26 PM
To: Deepak Naidu; ceph-users
Subject: Re: [ceph-users] saving file on cephFS mount using vi
OK, I tried strace to check why vi slows or pauses. It seems to be slow on fsync(3).
I didn't see the issue with the nano editor.
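In case it helps others reproduce it, a sketch (the mount path is illustrative):
# -T prints time spent in each syscall, so the slow fsync stands out
strace -T -e trace=fsync,fdatasync vi /mnt/cephfs/testfile
# or without an editor:
dd if=/dev/zero of=/mnt/cephfs/testfile bs=4k count=1 conv=fsync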
--
Deepak
From: Deepak Naidu
Sent: Wednesday, April 12, 2017 2:18 PM
To: 'ceph-users'
Subject: saving file on cephFS mount using vi takes pause/time
Folks,
This is a bit of a weird issue. I am using the cephFS volume to read/write files
etc. and it is quick, less than a second. But when editing a file on the cephFS
volume using vi, saving the file takes a couple of seconds, something like a
sync (flush). The same doesn't happen on local
the files are deleted.
Currently set to
"rgw_gc_max_objs": "97",
--
Deepak
From: Deepak Naidu
Sent: Wednesday, April 05, 2017 2:56 PM
To: Ben Hines
Cc: ceph-users
Subject: RE: [ceph-users] ceph df space for rgw.buckets.data shows used even
when files are deleted
Thanks Ben.
Hmm, pretty odd. When I ran tests comparing the ceph kernel client vs ceph-fuse,
FUSE has always been slower for both reads and writes.
Try using tools like fio to run an IO test with direct IO (bypassing the system cache).
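Something along these lines; the mount path and sizes are illustrative:
fio --name=directio-test --filename=/mnt/cephfs/fio.test \
  --rw=randwrite --bs=4k --size=1G --direct=1 \
  --ioengine=libaio --iodepth=16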
--
Deepak
On Apr 8, 2017, at 4:50 PM, Kyle Drake
--
Deepak
From: Ben Hines [mailto:bhi...@gmail.com]
Sent: Wednesday, April 05, 2017 2:41 PM
To: Deepak Naidu
Cc: ceph-users
Subject: Re: [ceph-users] ceph df space for rgw.buckets.data shows used even
when files are deleted
Ceph's RadosGW uses garbage collection by default.
Try running '
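For reference, a sketch of inspecting and manually kicking the RGW garbage
collector (I'm assuming this is roughly what the truncated command above was):
radosgw-admin gc list --include-all   # objects still awaiting deletion
radosgw-admin gc process              # run garbage collection now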
Folks,
Trying to test the S3 object GW. When I upload any files the space shows as used
(that's normal behavior), but when the object is deleted it still shows as used
(I don't understand this). Example below.
Currently there are no files in the entire S3 bucket, but it still shows space
used.
Hi John, any idea what's wrong? Any info is appreciated.
--
Deepak
-Original Message-
From: Deepak Naidu
Sent: Thursday, March 23, 2017 2:20 PM
To: John Spray
Cc: ceph-users
Subject: RE: [ceph-users] How to mount different ceph FS using ceph-fuse or
kernel cephfs mount
Fixing a typo.
I have a cephFS cluster. Below is the df output from a client node.
The question is why the df command, when the filesystem is mounted using ceph-fuse
or the kernel cephfs mount, shows "used space" when there is nothing used (empty,
no files or directories).
[root@storage ~]# df -h
Filesystem
Fixing typo
>>>> What version of ceph-fuse?
ceph-fuse-10.2.6-0.el7.x86_64
--
Deepak
-Original Message-
From: Deepak Naidu
Sent: Thursday, March 23, 2017 9:49 AM
To: John Spray
Cc: ceph-users
Subject: Re: [ceph-users] How to mount different ceph FS using ceph-fuse or
k
RDMA is of interest to me, hence my comment below.
>> What surprised me is that the result of RDMA mode is almost the same as the
>> basic mode, the iops, latency, throughput, etc.
Pardon my limited knowledge here, but if I read your ceph.conf and your notes
correctly, it seems that you are using RDMA only for
>> What version of ceph-fuse?
I have ceph-common-10.2.6-0 ( on CentOS 7.3.1611)
--
Deepak
>> On Mar 23, 2017, at 6:28 AM, John Spray <jsp...@redhat.com> wrote:
>>
>> On Wed, Mar 22, 2017 at 3:30 PM, Deepak Naidu <dna...@nvidia.com> wrote:
>> Hi Joh
6:16 AM
To: Deepak Naidu
Cc: ceph-users
Subject: Re: [ceph-users] How to mount different ceph FS using ceph-fuse or
kernel cephfs mount
On Tue, Mar 21, 2017 at 5:31 PM, Deepak Naidu
<dna...@nvidia.com> wrote:
> Greetings,
>
>
>
> I have b
Thanks Brad
--
Deepak
> On Mar 21, 2017, at 9:31 PM, Brad Hubbard <bhubb...@redhat.com> wrote:
>
>> On Wed, Mar 22, 2017 at 10:55 AM, Deepak Naidu <dna...@nvidia.com> wrote:
>> Do we know which version of the ceph client has a fix for this bug? Bug:
>>
Do we know which version of the ceph client has a fix for this bug? Bug:
http://tracker.ceph.com/issues/17191
I have ceph-common-10.2.6-0 (on CentOS 7.3.1611) & ceph-fs-common-10.2.6-1
(Ubuntu 14.04.5).
--
Deepak
---
Greetings,
I have the below two cephFS "volumes/filesystems" created on my ceph cluster. Yes,
I used the "enable_multiple" flag to enable the multiple-filesystem feature. My
question:
1) How do I specify the fs name, i.e. dataX or data1, during a cephFS mount,
either using the kernel mount or ceph-fuse?
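For what it's worth, a sketch of both mount flavors. The monitor address, mount
point, and secret file are illustrative; newer releases spell the option fs= /
--client_fs instead of mds_namespace:
mount -t ceph mon1:6789:/ /mnt/data1 -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=data1
ceph-fuse /mnt/data1 --client_mds_namespace=data1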
Not sure if this is still true with Jewel CephFS, i.e. that
"cephfs does not support any type of quota, df always reports entire cluster
size."
https://www.spinics.net/lists/ceph-users/msg05623.html
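For what it's worth, ceph-fuse can enforce per-directory quotas set via xattrs
(the kernel client of that era did not), and df on that subtree then typically
reflects the quota. A sketch; the path and size are illustrative, and older
releases may also need the client quota option enabled:
setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/data1   # 100 GiB
getfattr -n ceph.quota.max_bytes /mnt/cephfs/data1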
--
Deepak
From: Deepak Naidu
Sent: Thursday, March 16, 2017 6:19 PM
To: 'ceph-users'
Subject: CephFS
Greetings,
I am trying to build a CephFS system. Currently I have created my crush map, which
uses only certain OSDs, and I have pools created from them. But when I mount
cephFS, the mount size is my entire ceph cluster size; how is that?
Ceph cluster & pools
[ceph-admin@storageAdmin ~]$
OK, I found this tutorial on crushmaps from Sébastien Han. Hopefully I can get my
structure accomplished using the crushmap.
https://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/
--
Deepak
From: Deepak Naidu
Sent: Wednesday, March 15, 2017 12:45 PM
To: ceph-users
Subject: Creating
Hello,
I am trying to address the failure domain and performance/isolation of pools based
on which OSDs they can belong to. Let me give an example. Can I achieve this with
a crushmap ruleset or any other method, and if so, how?
Example:
10 storage servers, each with 3 OSDs, i.e. OSD.0 through OSD.29 --
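For reference, the usual crushmap round-trip for this kind of customization, as a
sketch (file, pool, and rule names are illustrative):
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt: add a root/bucket containing only the chosen OSDs and a rule that takes that root
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
ceph osd pool set mypool crush_ruleset 1    # pre-Luminous option name; rule id illustrative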
>> because Jewel will be retired:
Hmm, isn't Jewel LTS?
"Every other stable release is an LTS (Long Term Stable) and will receive updates
until two LTS are published."
--
Deepak
> On Mar 15, 2017, at 10:09 AM, Shinobu Kinjo wrote:
>
> It may be probably kind of challenge
I had a similar issue when using an older version of ceph-deploy. I see the URL
got.ceph.com doesn't work in a browser either.
To resolve this, I installed the latest version of ceph-deploy and it worked fine;
the new version wasn't using git.ceph.com.
During ceph-deploy you can mention which version of ceph to install.
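For example, a sketch (the release name and host names are illustrative):
ceph-deploy install --release jewel node1 node2 node3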
pth, and see what your iostat looks like; if it's
the same, then that's what your disk can do.
Now, if you want to compare ceph RBD perf, do the same on a normal block device.
--
Deepak
From: Matteo Dacrema [mailto:mdacr...@enter.eu]
Sent: Tuesday, March 07, 2017 1:17 PM
To: Deepak Naidu
Cc: ceph-users
My response is without any context of ceph or any SDS; it is purely about how to
check for the IO bottleneck. You can then determine if it's Ceph, any other
process, or the disk.
>> MySQL can reach only 150 iops both read or writes showing 30% of IOwait.
Low IOPS is not an issue by itself, as your block size
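A sketch of what I'd watch while the workload runs (the device name is
illustrative):
iostat -x 1 /dev/sda
# %util near 100, a growing avgqu-sz, and high await indicate the disk itself is saturated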
Folks,
Has anyone been using BlueStore with CephFS? If so, did you test with ZetaScale vs
RocksDB? Any install steps/best practices are appreciated.
PS: I still see that BlueStore is an "experimental feature"; any timeline on when
it will be GA/stable?
--
Deepak