[ceph-users] Encryption

2013-09-30 Thread Giuseppe 'Gippa' Paternò
Hi!
Maybe an FAQ, but is encryption of data at the storage level available
(or planned) in Ceph?
Thanks,
Giuseppe
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Wheezy machine died with problems on osdmap

2013-08-14 Thread Giuseppe 'Gippa' Paternò
Hi Sage,
> What kernel version is this? It looks like an old kernel bug.
> Generally speaking you should be using 3.4 at the very least if you
> are using the kernel client. sage 
This is the standard Wheezy kernel, i.e. 3.2.0-4-amd64.
While I could recompile the kernel, I don't think a custom kernel would
be manageable in production.
Is there a way I can open a bug against Debian asking for a backport of
the patch?
Thanks.
Regards,
Giuseppe




[ceph-users] Wheezy machine died with problems on osdmap

2013-08-13 Thread Giuseppe 'Gippa' Paternò
Hi all,
my Debian 7 wheezy machine died with the following in the logs:
http://pastebin.ubuntu.com/5981058/

It's using KVM with Ceph as an RBD device.
ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)

Could you please give me some advice?
Thanks,
Giuseppe


Re: [ceph-users] Python APIs

2013-06-18 Thread Giuseppe 'Gippa' Paternò
Hi John,
apologies for the late reply. The librados seems quite interesting ...
> Actually no.  I'll write up an API doc for you soon.
>
> sudo apt-get install python-ceph
>
> import rados

I wonder if I can make Python calls to interact with the object store
(say, cephfs.open() or mkdir()) directly, without involving radosgw.
I guess the C libs must be there, since you can mount it using FUSE ...
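For what it's worth, the librados binding John mentions can be exercised in a few lines. This is only a sketch: it assumes the python-ceph package is installed, a cluster is reachable, /etc/ceph/ceph.conf is readable, and a pool named "data" exists — it will not run without a live cluster.

```python
import rados

# Connect using the local cluster configuration.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # Open an I/O context on a pool ("data" is just an example name).
    ioctx = cluster.open_ioctx('data')
    try:
        # Store an object and read it back.
        ioctx.write_full('hello_object', b'hello world')
        print(ioctx.read('hello_object'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

Note that filesystem-style calls (open(), mkdir()) live in the separate libcephfs binding (the `cephfs` module), not in `rados`, which only speaks objects.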
Thanks.
Cheers,
Giuseppe



Re: [ceph-users] MDS dying on cuttlefish

2013-05-30 Thread Giuseppe 'Gippa' Paternò
... and BTW, I know it's my fault that I hadn't done the mds newfs, but
I think it would be better to print an error rather than dumping core
with a trace.
Just my eur 0.02 :)
Cheers,
Giuseppe


Re: [ceph-users] MDS dying on cuttlefish

2013-05-30 Thread Giuseppe 'Gippa' Paternò
Hi Greg,
just for your own information, ceph mds newfs has disappeared from the
help screen of the "ceph" command, and it was a nightmare to understand
the syntax (which has changed)... luckily the sources were there :)

For the "flight log":
ceph mds newfs <metadata pool id> <data pool id> --yes-i-really-mean-it
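For anyone landing here later, a sketch of that invocation with concrete pool IDs. The IDs below are only examples (on a default install of that era, data was pool 0 and metadata pool 1); verify your own IDs first:

```shell
# List pools with their numeric IDs.
ceph osd dump | grep '^pool'

# Recreate the MDS filesystem tables, pointing at the metadata
# and data pools by ID (here: metadata = 1, data = 0).
ceph mds newfs 1 0 --yes-i-really-mean-it
```

The --yes-i-really-mean-it flag is required because newfs discards any existing CephFS metadata on those pools.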

Cheers,
Gippa


Re: [ceph-users] MDS dying on cuttlefish

2013-05-29 Thread Giuseppe 'Gippa' Paternò
Hi Greg,
> Oh, not the OSD stuff, just the CephFS stuff that goes on top. Look at
> http://www.mail-archive.com/ceph-users@lists.ceph.com/msg00029.html
> Although if you were re-creating pools and things, I think that would
> explain the crash you're seeing.
> -Greg
>
I was thinking about that ... the problem is that with cuttlefish
(0.61.2) the command seems to be gone.
Has it moved?
Thanks,
Giuseppe



Re: [ceph-users] MDS dying on cuttlefish

2013-05-28 Thread Giuseppe 'Gippa' Paternò
Hi Greg,
> Do I correctly assume that you don't have any CephFS data in the cluster yet?
The funny thing is this was a fresh installation.
Just for your information, ceph-deploy didn't work for me and I had to
do all the operations manually.
I recreated one of the two ceph clusters with bobtail, same config, and
worked like a charm immediately.

> If so, I'd just delete your current filesystem and metadata pool, then 
> recreate them.
> It should all be in the docs. :)
Although the cluster was empty, I ran ceph-osd --mkfs again, but with no
luck. I also deleted all the pools and recreated them (with a simple
ceph osd pool create).
Regards,
Giuseppe


[ceph-users] MDS dying on cuttlefish

2013-05-23 Thread Giuseppe 'Gippa' Paternò
Hi!

I've got a cluster of two nodes on Ubuntu 12.04 with cuttlefish from the
ceph.com repo.
ceph version 0.61.2 (fea782543a844bb277ae94d3391788b76c5bee60)

The MDS process dies after a while with a stack trace, but I can't
understand why.
I reproduced the same problem on Debian 7 with the same repository.

-3> 2013-05-23 23:00:42.957679 7fa39e28e700  1 --
10.123.200.189:6800/28919 <== osd.0 10.123.200.188:6802/27665 1 ====
osd_op_reply(5 200. [read 0~0] ack = -2 (No such file or
directory)) v4 ==== 111+0+0 (2261481792 0 0) 0x29afe00 con 0x29c4b00
-2> 2013-05-23 23:00:42.957780 7fa39e28e700  0 mds.0.journaler(ro)
error getting journal off disk
-1> 2013-05-23 23:00:42.960974 7fa39e28e700  1 --
10.123.200.189:6800/28919 <== osd.0 10.123.200.188:6802/27665 2 ====
osd_op_reply(1 mds0_inotable [read 0~0] ack = -2 (No such file or
directory)) v4 ==== 112+0+0 (1612134461 0 0) 0x2a1c200 con 0x29c4b00
 0> 2013-05-23 23:00:42.963326 7fa39e28e700 -1 mds/MDSTable.cc: In
function 'void MDSTable::load_2(int, ceph::bufferlist&, Context*)'
thread 7fa39e28e700 time 2013-05-23 23:00:42.961076
mds/MDSTable.cc: 150: FAILED assert(0)

 ceph version 0.61.2 (fea782543a844bb277ae94d3391788b76c5bee60)
 1: (MDSTable::load_2(int, ceph::buffer::list&, Context*)+0x3bb) [0x6dd2db]
 2: (Objecter::handle_osd_op_reply(MOSDOpReply*)+0xe1b) [0x7275bb]
 3: (MDS::handle_core_message(Message*)+0xae7) [0x513c57]
 4: (MDS::_dispatch(Message*)+0x33) [0x513d53]
 5: (MDS::ms_dispatch(Message*)+0xab) [0x515b3b]
 6: (DispatchQueue::entry()+0x393) [0x847ca3]
 7: (DispatchQueue::DispatchThread::entry()+0xd) [0x7caeed]
 8: (()+0x6b50) [0x7fa3a3376b50]
 9: (clone()+0x6d) [0x7fa3a1d24a7d]

Full logs here:
http://pastebin.com/C81g5jFd

I can't understand why, and I'd really appreciate a hint.
Thanks!
Regards,
  Giuseppe