Re: [ceph-users] CoreOS Cluster of 7 machines and Ceph

2016-06-03 Thread Michael Shuey
Sorry for the late reply - been traveling. I'm doing exactly that right now, using the ceph-docker container. It's just in my test rack for now, but hardware arrived this week to seed the production version. I'm using separate containers for each daemon, including a container for each OSD. I've
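A one-daemon-per-container layout with ceph-docker generally follows this shape; the `ceph/daemon` image, `--net=host`, and the bind mounts here are assumptions about a typical setup, not Michael's actual configuration. The commands are stored and echoed as a dry run, since a real run needs a configured cluster:

```shell
# Hypothetical ceph/daemon invocations, one container per daemon.
# Echoed rather than executed; a real run requires a bootstrapped cluster.
MON_CMD="docker run -d --net=host --name ceph-mon \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ceph/daemon mon"
OSD_CMD="docker run -d --net=host --privileged --name ceph-osd-0 \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -v /dev:/dev ceph/daemon osd"
echo "$MON_CMD"
echo "$OSD_CMD"
```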

Re: [ceph-users] Crashing OSDs (suicide timeout, following a single pool)

2016-06-03 Thread Adam Tygart
With regards to this export/import process, I've been exporting a pg from an osd for more than 24 hours now. The entire OSD only has 8.6GB of data. 3GB of that is in omap. The export for this particular PG is only 108MB in size right now, after more than 24 hours. How is it possible that a

Re: [ceph-users] jewel upgrade and sortbitwise

2016-06-03 Thread Francois Lafont
Hi, On 03/06/2016 16:29, Samuel Just wrote: > Sorry, I should have been more clear. The bug actually is due to a > difference in an on-disk encoding from hammer. An infernalis cluster would > never have had such encodings and is fine. Ah ok, fine. ;) Thanks for the answer. Bye. -- François

Re: [ceph-users] Required maintenance for upgraded CephFS filesystems

2016-06-03 Thread Scottix
Great, thanks. --Scott On Fri, Jun 3, 2016 at 8:59 AM John Spray wrote: > On Fri, Jun 3, 2016 at 4:49 PM, Scottix wrote: > > Is there any way to check what it is currently using? > > Since Firefly, the MDS rewrites TMAPs to OMAPs whenever a directory is >

Re: [ceph-users] Crashing OSDs (suicide timeout, following a single pool)

2016-06-03 Thread Brandon Morris, PMP
Nice catch. That was a copy-paste error. Sorry it should have read: 3. Flush the journal and export the primary version of the PG. This took 1 minute on a well-behaved PG and 4 hours on the misbehaving PG i.e. ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-16 --journal-path
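The flush-and-export step described above generally takes this shape; the OSD id, journal path, PG id, and output file below are hypothetical placeholders, not the poster's values (the original command is truncated). The commands are echoed rather than executed, since they require a stopped OSD on a real cluster:

```shell
# Sketch of step 3: flush the journal, then export one PG with
# ceph-objectstore-tool. All ids and paths are hypothetical.
OSD=16
PGID=70.459
DATA="/var/lib/ceph/osd/ceph-$OSD"

echo "ceph-osd -i $OSD --flush-journal"
echo "ceph-objectstore-tool --data-path $DATA --journal-path $DATA/journal --pgid $PGID --op export --file /tmp/pg.$PGID.export"
```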

Re: [ceph-users] what does the 'rbd watch ' mean?

2016-06-03 Thread Jason Dillaman
That command is used for debugging to show the notifications sent by librbd whenever image properties change. These notifications are used by other librbd clients with the same image open to synchronize state (e.g. a snapshot was created so instruct the other librbd client to refresh the image's

Re: [ceph-users] jewel upgrade and sortbitwise

2016-06-03 Thread Francois Lafont
Hi, On 03/06/2016 05:39, Samuel Just wrote: > Due to http://tracker.ceph.com/issues/16113, it would be best to avoid > setting the sortbitwise flag on jewel clusters upgraded from previous > versions until we get a point release out with a fix. > > The symptom is that setting the sortbitwise

Re: [ceph-users] Infernalis => Jewel: ceph-fuse regression concerning the automatic mount at boot?

2016-06-03 Thread Francois Lafont
Hi, On 02/06/2016 04:44, Francois Lafont wrote: > ~# grep ceph /etc/fstab > id=cephfs,keyring=/etc/ceph/ceph.client.cephfs.keyring,client_mountpoint=/ > /mnt/ fuse.ceph noatime,nonempty,defaults,_netdev 0 0 [...] > And I have rebooted. After the reboot, big surprise with this: > > ~# cat
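For reference, the quoted fstab entry uses the `fuse.ceph` mount helper; this is the same line shown whole (the client id, keyring path, and mountpoint are the poster's own):

```
# /etc/fstab — ceph-fuse mount via the fuse.ceph helper
id=cephfs,keyring=/etc/ceph/ceph.client.cephfs.keyring,client_mountpoint=/  /mnt/  fuse.ceph  noatime,nonempty,defaults,_netdev  0 0
```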

Re: [ceph-users] Problems with Calamari setup

2016-06-03 Thread fridifree
I'll check it out. Thank you. On Jun 2, 2016 11:46 PM, "Michael Kuriger" wrote: > For me, this same issue was caused by having too new a version of salt. > I’m running salt-2014.1.5-1 on CentOS 7.2, so yours will probably be > different. But I thought it was worth mentioning. > > >

Re: [ceph-users] mount error 5 = Input/output error (kernel driver)

2016-06-03 Thread John Spray
On Mon, May 30, 2016 at 8:33 PM, Ilya Dryomov wrote: > On Mon, May 30, 2016 at 4:12 PM, Jens Offenbach wrote: >> Hello, >> in my OpenStack Mitaka, I have installed the additional service "Manila" >> with a CephFS backend. Everything is working. All shares

[ceph-users] Required maintenance for upgraded CephFS filesystems

2016-06-03 Thread John Spray
Hi, If you do not have a CephFS filesystem that was created with a Ceph version older than Firefly, then you can ignore this message. If you have such a filesystem, you need to run a special command at some point while you are using Jewel, but before upgrading to future versions. Please see the

Re: [ceph-users] CephFS: slow writes over NFS when fs is mounted with kernel driver but fast with Fuse

2016-06-03 Thread Jan Schermer
It should be noted that using "async" with NFS _will_ corrupt your data if anything happens. It's ok-ish for something like an image library, but it's most certainly not OK for VM drives, databases, or if you write any kind of binary blobs that you can't recreate. If ceph-fuse is fast (you
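The sync/async distinction lives in the NFS server's exports file; this fragment is illustrative only (the path and client range are hypothetical, not taken from the thread):

```
# /etc/exports — 'sync' commits each write to stable storage before the
# server replies; 'async' acknowledges early and can lose or corrupt data
# if the server crashes.
/srv/cephfs  192.168.0.0/24(rw,sync,no_subtree_check)
```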

[ceph-users] what does the 'rbd watch ' mean?

2016-06-03 Thread dingx...@hotmail.com
Everyone: hi. I am writing a doc for the rbd command. When I use the command “rbd watch ”, it can only display as follows: When I create a snap, delete a snap, lock an image, protect a snap, or unprotect a snap, it changes like this: So I do not know how to use this command and what this command
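As Jason's reply above explains, `rbd watch` prints the notifications librbd sends when image properties change. A typical two-terminal session looks like this; the pool and image names are hypothetical, and the commands are echoed as a dry run since they need a live cluster:

```shell
# Terminal 1 watches the image; terminal 2 mutates it, which triggers a
# notify that terminal 1 prints. Pool/image names are placeholders.
WATCH_CMD="rbd watch rbd/myimage"
NOTIFY_CMD="rbd snap create rbd/myimage@s1"
echo "$WATCH_CMD    # terminal 1: prints one line per notification"
echo "$NOTIFY_CMD   # terminal 2: snapshot creation triggers a notify"
```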