Re: [ceph-users] ceph and Fscache : can you kindly share your experiences?

2017-08-01 Thread Anish Gupta
Hello Webert,
Thank you for your response.
I am not interested in the SSD cache tier pool at all, as that is on the Ceph
Storage Cluster server and is somewhat well documented/understood.
My question regards enabling caching at the Ceph clients that talk to the
Ceph Storage Cluster.
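To be concrete, the kind of client-side setup I have in mind looks roughly like this (a sketch only; the package name is the Debian/Ubuntu one, and the monitor address, mount point, and secret file path are placeholders):

```shell
# Install and enable the userspace cache daemon that backs the kernel's
# FSCache layer (package name shown for Debian/Ubuntu; adjust per distro)
apt-get install cachefilesd
sed -i 's/#RUN=yes/RUN=yes/' /etc/default/cachefilesd
systemctl start cachefilesd

# Mount CephFS with the kernel client, opting in to FSCache via "-o fsc"
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret,fsc
```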
Thanks,
Anish



On Tuesday, August 1, 2017, 9:55:39 AM PDT, Webert de Souza Lima wrote:

Hi Anish, in case you're still interested: we've been using CephFS in production
since Jewel 10.2.1.

I have a few similar clusters with some small setup variations. They're not so
big, but they're under heavy workload.
- 15~20 x 6TB HDD OSDs (5 per node), ~4 x 480GB SSD OSDs (2 per node, set for the cache tier pool)
- About 4 mount points per cluster, so I assume that translates to 4 clients per cluster
- Running 10.2.9 on Ubuntu 4.4.0-24-generic now.
Cache tiering is enabled for CephFS on a separate pool that uses the SSDs
as OSDs, if that's really what you want to know.
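For completeness, that kind of tiering setup is built roughly like this (a sketch; the pool names and the size threshold are illustrative, not our exact values):

```shell
# Put a pool backed by the SSD OSDs in front of the CephFS data pool
# as a writeback cache tier (pool names here are examples)
ceph osd tier add cephfs_data cephfs_cache
ceph osd tier cache-mode cephfs_cache writeback
ceph osd tier set-overlay cephfs_data cephfs_cache

# Hit-set tracking and a size cap so the tier can flush/evict sensibly
ceph osd pool set cephfs_cache hit_set_type bloom
ceph osd pool set cephfs_cache target_max_bytes 400000000000
```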

Cya,

Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
On Mon, Jul 24, 2017 at 3:27 PM, Anish Gupta wrote:

Hello,

Can you kindly share your experience with the built-in FSCache support with
Ceph?
Interested in knowing the following:
- Are you using FSCache in a production environment?
- How large is your Ceph deployment?
- If with CephFS, how many Ceph clients are using FSCache?
- Which version of Ceph and Linux kernel?

Thank you,
Anish



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





[ceph-users] ceph and Fscache : can you kindly share your experiences?

2017-07-24 Thread Anish Gupta
Hello,

Can you kindly share your experience with the built-in FSCache support with
Ceph?
Interested in knowing the following:
- Are you using FSCache in a production environment?
- How large is your Ceph deployment?
- If with CephFS, how many Ceph clients are using FSCache?
- Which version of Ceph and Linux kernel?

Thank you,
Anish




Re: [ceph-users] How's cephfs going?

2017-07-19 Thread Anish Gupta
Hello,
Can anyone share their experience with the built-in FSCache support, with or
without CephFS?
Interested in knowing the following:
- Are you using FSCache in a production environment?
- How large is your Ceph deployment?
- If with CephFS, how many Ceph clients are using FSCache?
- Which version of Ceph and Linux kernel?

Thank you,
Anish Gupta


On Wednesday, July 19, 2017, 6:06:57 AM PDT, Donny Davis wrote:

I had a corruption issue with the FUSE client on Jewel. I use CephFS for a
Samba share with a light load, and I was using the FUSE client. I had a power
flap and didn't realize my UPS batteries had gone bad, so the MDS servers were
cycled a couple of times, and somehow the file system became corrupted. I
moved to the kernel client, and after the FUSE experience I put it through
horrible things.
I had every connected client start copying over their user profiles, and then I
started pulling and restarting MDS servers. I saw very few errors, and only
blips in the copy processes. My experience with the kernel client has been very
positive, and I would say stable. Nothing replaces a solid backup copy of your
data if you care about it.
I am still on Jewel, and my CephFS is daily-driven; I can barely notice any
difference between it and the past setups I have had.
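For anyone comparing, the two clients are mounted roughly like this (a sketch; the monitor address, mount point, and secret file path are examples):

```shell
# FUSE client (the one I had corruption with under the power flap):
ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs

# Kernel client (the one that held up under MDS restarts):
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret
```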



On Wed, Jul 19, 2017 at 7:02 AM, Дмитрий Глушенок  wrote:

Unfortunately no. Using FUSE was discarded due to poor performance.

On 19 July 2017 at 13:45, Blair Bethwaite wrote:
Interesting. Any FUSE client data-points?

On 19 July 2017 at 20:21, Дмитрий Глушенок  wrote:

RBD (via krbd) was in action at the same time - no problems.

On 19 July 2017 at 12:54, Blair Bethwaite wrote:

It would be worthwhile repeating the first test (crashing/killing an
OSD host) again with just plain rados clients (e.g. rados bench)
and/or rbd. It's not clear whether your issue is specifically related
to CephFS or actually something else.
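Something along these lines, run while you crash the OSD host, would isolate the RADOS layer (pool name and durations are examples):

```shell
# Sustained write load against a plain RADOS pool, no CephFS involved;
# keep the objects so a read pass can follow
rados bench -p testbench 300 write --no-cleanup

# Sequential read pass over the objects just written
rados bench -p testbench 60 seq

# Remove the benchmark objects afterwards
rados -p testbench cleanup
```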

Cheers,

On 19 July 2017 at 19:32, Дмитрий Глушенок  wrote:

Hi,

I can share negative test results (on Jewel 10.2.6). All tests were
performed while actively writing to CephFS from a single client (about 1300
MB/sec). The cluster consists of 8 nodes with 8 OSDs each (2 SSDs for journals
and metadata, 6 HDDs in RAID6 for data); MON/MDS are on dedicated nodes. 2 MDS
in total, active/standby.
- Crashing one node resulted in write hangs for 17 minutes. Repeating the
test resulted in CephFS hanging forever.
- Restarting the active MDS resulted in a successful failover to the standby.
Then, after the standby became active and the restarted MDS had become the
standby, the new active was restarted. CephFS hung for 12 minutes.
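For reference, the failover part of the test was driven roughly like this (a sketch; the rank and the systemd unit name are examples):

```shell
# Force the active MDS (rank 0 assumed) to fail over to the standby
ceph mds fail 0

# Watch which daemon is active vs. standby while the client keeps writing
ceph mds stat

# Then restart the daemon that has just become active
systemctl restart ceph-mds@mds-node1
```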

P.S. Planning to repeat the tests again on 10.2.7 or higher

On 19 July 2017 at 6:47, 许雪寒 wrote:

Is there anyone else willing to share some usage information about CephFS?
Could the developers tell us whether CephFS is a major effort in overall Ceph
development?

From: 许雪寒
Sent: 17 July 2017, 11:00
To: ceph-users@lists.ceph.com
Subject: How's cephfs going?

Hi, everyone.

We intend to use CephFS of the Jewel version; however, we don’t know its status.
Is it production-ready in Jewel? Does it still have lots of bugs? Is it a
major effort of current Ceph development? And who is using CephFS now?


--
Dmitry Glushenok
Jet Infosystems






--
Cheers,
~Blairo













