Re: CephFS use cases + MDS limitations

2013-11-05 Thread Malcolm Haak
Hope my wall of text makes sense. Please feel free to ping me with questions. Regards Malcolm Haak On 04/11/13 09:53, Michael Sevilla wrote: Hi Ceph community, I’d like to get a feel for some of the problems that CephFS users are encountering with single MDS deployments. There were requests

Re: writing a ceph cliente for MS windows

2013-11-07 Thread Malcolm Haak
I'm just going to throw these in there. http://www.acc.umu.se/~bosse/ They are GPLv2; some already use sockets and such from inside the kernel. Heck, you might even be able to mod the HTTP one to use the rados gateway. I don't know, as I haven't sat down and pulled them apart enough yet. They might

Re: HSM

2013-11-10 Thread Malcolm Haak
of extended metadata about whole objects, it could use the same interfaces as well. Hope that was actually helpful and not just an obvious rehash... Regards Malcolm Haak On 09/11/13 18:33, Sage Weil wrote: The latest Lustre just added HSM support: http://archive.hpcwi

Re: HSM

2013-11-11 Thread Malcolm Haak
changelog, which Robinhood uses to replicate metadata into its MySQL database with all the indices that it wants. On Sun, Nov 10, 2013 at 11:17 PM, Malcolm Haak wrote: So there aren't really any hooks in that exports are triggered by the policy engine after a scan of the metadata, an

Re: HSM

2013-11-20 Thread Malcolm Haak
It is, except it might not be. DMAPI only works if you are the one in charge of the HSM and the filesystem. So, for example, in a DMF solution the filesystem mounted with DMAPI options is on your NFS head node. Your HSM solution is also installed there. Things get a bit more odd when you look

Re: [GIT PULL] Ceph updates and fixes for 3.13

2013-12-04 Thread Malcolm Haak
Hi Dave, This is a definite bug/regression. I've bumped into it as well. It's still in 3.13-rc2. I've lodged a bug report on it. Regards Malcolm Haak On 24/11/13 19:59, Dave (Bob) wrote: I have just tried ceph 0.72.1 and kernel 3.13.0-rc1. There seems to be a problem

RBD Read performance

2013-04-17 Thread Malcolm Haak
info do you want/where do I start hunting for my wumpus? Regards Malcolm Haak

Re: RBD Read performance

2013-04-18 Thread Malcolm Haak
Hi Mark! Thanks for the quick reply! I'll reply inline below. On 18/04/13 17:04, Mark Nelson wrote: On 04/17/2013 11:35 PM, Malcolm Haak wrote: Hi all, Hi Malcolm! I jumped into the IRC channel yesterday and they said to email ceph-devel. I have been having some read performance i

Re: RBD Read performance

2013-04-18 Thread Malcolm Haak
@dogbreath ~]# dd if=/todd-rbd-fs/DELETEME of=/dev/null bs=4M count=1 1+0 records in 1+0 records out 4194304 bytes (42 GB) copied, 316.025 s, 133 MB/s [root@dogbreath ~]# No change, which is a shame. What other information or testing should I start? Regards Malcolm Haak On 18/04/13
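For anyone reproducing the dd read test from this thread: the usual pattern is to drop the client page cache before each pass so dd measures the cluster rather than RAM. A minimal self-contained sketch follows; it uses a small temp file as a stand-in for the /todd-rbd-fs/DELETEME file from the thread, and the drop_caches step (root only) is shown as a comment.

```shell
# Sketch of the dd read-throughput pattern discussed in this thread.
# Before timing a real pass against an RBD-backed file, flush the page
# cache (root only) so the read actually hits the cluster:
#   sync; echo 3 > /proc/sys/vm/drop_caches
f=$(mktemp)
dd if=/dev/zero of="$f" bs=4M count=4 2>/dev/null     # 16 MiB stand-in file
bytes=$(dd if="$f" of=/dev/null bs=4M 2>&1 | awk '/bytes/ {print $1}')
rm -f "$f"
echo "$bytes"                                         # prints 16777216
```

The bytes-copied figure comes from dd's own transfer summary on stderr, which is the number the thread is comparing across runs.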

Re: RBD Read performance

2013-04-18 Thread Malcolm Haak
dropping caches on the OSDs as well, but even if it was caching at the OSD end, the IB link is only QDR and we aren't doing RDMA, so... Yeah, no idea what is going on here... On 19/04/13 10:40, Mark Nelson wrote: On 04/18/2013 07:27 PM, Malcolm Haak wrote: Morning all, Did the echos on

Re: RBD Read performance

2013-04-21 Thread Malcolm Haak
the pointers! Regards Malcolm Haak On 19/04/13 12:21, Malcolm Haak wrote: Ok, this is getting interesting. rados -p bench 300 write --no-cleanup Total time run: 301.103933 Total writes made: 22477 Write size: 4194304 Bandwidth (MB/sec): 298.595 Stddev Bandwidth:
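The bench summary above is flattened by the archive, but the figures it quotes are internally consistent: 22477 writes of 4194304 bytes over 301.103933 s works out to the reported 298.595 "MB/sec" (rados bench's MB/sec is MiB/s). A quick sanity check, using only the numbers from the snippet:

```shell
# Cross-check the rados bench summary quoted above:
# 22477 writes x 4194304 bytes over 301.103933 s, expressed in MiB/s.
bw=$(awk 'BEGIN { printf "%.3f", 22477 * 4194304 / 301.103933 / (1024 * 1024) }')
echo "$bw MB/sec"     # prints 298.595 MB/sec
```

This kind of cross-check is useful when comparing rados bench numbers against dd results, since dd reports decimal MB/s while rados bench reports binary MiB/s.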