CephFS can use fscache. I am testing it at the moment.

Some lines from my deployment process:

sudo apt-get install linux-generic-lts-utopic cachefilesd
sudo reboot
sudo mkdir /mnt/cephfs
sudo mkdir /mnt/ceph_cache
sudo mkfs -t xfs /dev/md3  # A 100 GB local RAID partition
sudo bash -c "echo /dev/md/3 /mnt/ceph_cache xfs defaults,noatime 0 0 >> /etc/fstab"
sudo bash -c "echo M1:6789,M2:6789:/ /mnt/cephfs ceph name=fsuser,secretfile=/etc/ceph/fsuser.secret,noatime,fsc,_netdev 0 0 >> /etc/fstab"
sudo bash -c "echo REDACTED > /etc/ceph/fsuser.secret"
sudo chmod 400 /etc/ceph/fsuser.secret
sudo mount /mnt/ceph_cache/
sudo sed -i 's/#RUN=yes/RUN=yes/g' /etc/default/cachefilesd
sudo vim /etc/cachefilesd.conf  # Change dir to /mnt/ceph_cache, tag to ceph_cache, and comment out everything else (sketch below)
sudo service cachefilesd start
sudo mount /mnt/cephfs
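
For reference, after that edit /etc/cachefilesd.conf ends up being essentially just the two lines below. This is only a sketch; the commented-out culling thresholds are the usual packaged defaults and may differ slightly in your version of cachefilesd:

dir /mnt/ceph_cache
tag ceph_cache
# brun 10%
# bcull 7%
# bstop 3%
# frun 10%
# fcull 7%
# fstop 3%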
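
To confirm fscache is actually engaged once everything is mounted, something like this should do (assuming your kernel exposes the fscache statistics file; the counters should start moving as you read files):

grep fsc /proc/mounts
cat /proc/fs/fscache/stats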

Cheers, Les

On 04.09.2015 00:58, Kyle Hutson wrote:
I was wondering if anybody could give me some insight as to how CephFS does its 
caching - read-caching in particular.

We are using CephFS with an EC pool on the backend and a replicated cache pool in front of it. We're seeing some very slow read times. Computing an md5sum on a 15GB file twice in a row (so the second run should come from cache) only brings the time down from 23 minutes to 17 minutes. This is over a 10Gbps network with a crap-ton of OSDs (over 300), so I would expect it to be in the 2-3 minute range.

I'm just trying to figure out what we can do to increase performance. I have over 300 TB of live data that I have to be careful with, though, so I need to proceed with some caution.

Is there some other caching we can do (client-side or server-side) that might 
give us a decent performance boost?



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

