Hi!
Maybe an FAQ, but is encryption of data available (or will be available)
in ceph at a storage level?
Thanks,
Giuseppe
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi Sage,
What kernel version is this? It looks like an old kernel bug.
Generally speaking you should be using 3.4 at the very least if you
are using the kernel client.
sage
This is the standard Wheezy kernel, i.e. 3.2.0-4-amd64
While I can recompile the kernel, I don't think it would be
Hi all,
my Debian 7 wheezy machine died with the following in the logs:
http://pastebin.ubuntu.com/5981058/
It's using kvm and ceph as an rbd device.
ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
Could you please give me some advice?
Thanks,
Giuseppe
Hi John,
apologies for the late reply. librados seems quite interesting ...
Actually no. I'll write up an API doc for you soon.
sudo apt-get install python-ceph
import rados
I wonder if I can make Python calls to interact with the object store
(say: cephfs.open(), mkdir()) directly
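For the curious, here is a rough sketch of what such calls might look like with the python-ceph bindings. The pool name, object name, directory path, and config file location are all made up for illustration, and this obviously needs a reachable cluster to actually do anything:

```python
def demo(conf_file="/etc/ceph/ceph.conf", pool="data"):
    """Sketch: write an object via librados, then mkdir via libcephfs.

    Assumes a running cluster; pool/object/path names are illustrative.
    """
    import rados   # shipped in python-ceph
    import cephfs  # shipped in python-ceph

    # librados: talk to the object store directly
    cluster = rados.Rados(conffile=conf_file)
    cluster.connect()
    ioctx = cluster.open_ioctx(pool)
    ioctx.write_full("hello_object", "hello world")  # replace whole object
    print(ioctx.read("hello_object"))
    ioctx.close()
    cluster.shutdown()

    # libcephfs: POSIX-like filesystem calls against CephFS
    fs = cephfs.LibCephFS()
    fs.conf_read_file(conf_file)
    fs.mount()
    fs.mkdir("/testdir", 0o755)
    fs.shutdown()
```

No warranty that the exact constructor signatures match the 0.61 bindings, so check the module's pydoc on your box.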
Hi Greg,
just for your own information, ceph mds newfs has disappeared from the
help screen of the ceph command, and it was a nightmare to work out
the syntax (which has changed)... luckily the sources were there :)
For the flight log:
ceph mds newfs <metadata pool id> <data pool id>
... and BTW, I know it's my fault that I hadn't done the mds newfs, but
I think it would be better to print an error rather than dumping core
with a stack trace.
Just my eur 0.02 :)
Cheers,
Giuseppe
Hi!
I've got a cluster of two nodes on Ubuntu 12.04 with cuttlefish from the
ceph.com repo.
ceph version 0.61.2 (fea782543a844bb277ae94d3391788b76c5bee60)
The MDS process is dying after a while with a stack trace, but I can't
understand why.
I reproduced the same problem on debian 7 with the