Sorry for the late reply - been traveling.
I'm doing exactly that right now, using the ceph-docker container.
It's just in my test rack for now, but hardware arrived this week to
seed the production version.
I'm using separate containers for each daemon, including a container
for each OSD. I've
With regards to this export/import process, I've been exporting a PG
from an OSD for more than 24 hours now. The entire OSD holds only 8.6GB
of data, 3GB of which is in omap. The export for this particular PG is
only 108MB in size right now, after more than 24 hours. How is it
possible that a fragment
Hi,
On 03/06/2016 16:29, Samuel Just wrote:
> Sorry, I should have been more clear. The bug is actually due to a
> difference in an on-disk encoding from hammer. An infernalis cluster would
> never have had such encodings and is fine.
Ah ok, fine. ;)
Thanks for the answer.
Bye.
--
François Lafont
Great thanks.
--Scott
On Fri, Jun 3, 2016 at 8:59 AM John Spray wrote:
> On Fri, Jun 3, 2016 at 4:49 PM, Scottix wrote:
> > Is there any way to check what it is currently using?
>
> Since Firefly, the MDS rewrites TMAPs to OMAPs whenever a directory is
> updated, so a pre-firefly filesystem mig
Nice catch. That was a copy-paste error, sorry.
It should have read:
3. Flush the journal and export the primary version of the PG. This took
1 minute on a well-behaved PG and 4 hours on the misbehaving PG
i.e. ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-16
--journal-path /va
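For reference, a complete invocation along those lines would look something like the following. This is a sketch, not the exact command from the thread: the journal path, PG id, and output file are illustrative placeholders, and the OSD must be stopped before running it.

```shell
# Hypothetical example -- PG id, journal path, and output file are
# placeholders, not values from this thread. Stop the OSD first.
ceph-objectstore-tool \
    --data-path /var/lib/ceph/osd/ceph-16 \
    --journal-path /var/lib/ceph/osd/ceph-16/journal \
    --op export --pgid 1.2a \
    --file /tmp/pg.1.2a.export
```

The exported file can later be imported into another OSD with `--op import`.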
On Fri, Jun 3, 2016 at 4:49 PM, Scottix wrote:
> Is there any way to check what it is currently using?
Since Firefly, the MDS rewrites TMAPs to OMAPs whenever a directory is
updated, so a pre-firefly filesystem might already be all OMAPs, or
might still have some TMAPs -- there's no way to know wi
Is there any way we could have a "leveldb_defrag_on_mount" option for
the OSDs, similar to the "leveldb_compact_on_mount" option?
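For comparison, the existing option is enabled per-OSD in ceph.conf like this (a sketch; the proposed "leveldb_defrag_on_mount" does not exist, and the section placement is illustrative):

```ini
[osd]
# Existing option: compact the OSD's leveldb omap store at startup.
leveldb_compact_on_mount = true
```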
Also, I've got at least one user that is creating and deleting
thousands of files at a time in some of their directories (keeping
1-2% of them). Could that cause this fr
Is there any way to check what it is currently using?
Best,
Scott
On Fri, Jun 3, 2016 at 4:26 AM John Spray wrote:
> Hi,
>
> If you do not have a CephFS filesystem that was created with a Ceph
> version older than Firefly, then you can ignore this message.
>
> If you have such a filesystem, you
I'm hoping to implement cephfs in production at some point this year so I'd
be interested to hear your progress on this.
Have you considered SSD for your metadata pool? You wouldn't need loads of
capacity although even with reliable SSD I'd probably still do x3
replication for metadata. I've been
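Setting that up would roughly amount to the following (a sketch for the Jewel-era CLI; the pool name "cephfs_metadata" and the CRUSH ruleset id are illustrative placeholders, and the SSD ruleset must already exist in the CRUSH map):

```shell
# Point the metadata pool at an SSD-backed CRUSH ruleset (id 1 here is
# a placeholder) and keep 3x replication.
ceph osd pool set cephfs_metadata crush_ruleset 1
ceph osd pool set cephfs_metadata size 3
```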
I'd be worried about it getting "fast" all of a sudden. Test crash consistency.
If you test something like file creation, you should be able to estimate whether
it should be that fast. (So it should be some fraction of the theoretical IOPS on the
drives/backing rbd device...)
If it's too fast then maybe the
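That estimate can be done on the back of an envelope. A sketch with entirely hypothetical numbers (not measurements from this thread):

```python
# Back-of-the-envelope check, all numbers hypothetical: if every
# synchronous file creation costs roughly `ops_per_create` journaled
# writes, the sustainable create rate is bounded by the device IOPS.

def max_creates_per_sec(device_iops: int, ops_per_create: int = 2) -> float:
    """Rough upper bound on synchronous file creations per second."""
    return device_iops / ops_per_create

# A drive rated at ~10,000 write IOPS would top out around 5,000
# creates/sec under this model; a benchmark far above that suggests
# writes are being acknowledged before reaching stable storage.
print(max_creates_per_sec(10_000))
```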
Sorry, I should have been more clear. The bug is actually due to a
difference in an on-disk encoding from hammer. An infernalis cluster would
never have had such encodings and is fine.
-Sam
On Jun 3, 2016 6:53 AM, "Francois Lafont" wrote:
> Hi,
>
> On 03/06/2016 05:39, Samuel Just wrote:
>
> > D
Zheng, thanks for looking into this. It makes sense, although strangely I've
set up a new NFS server (different hardware, same OS, kernel etc.) and I'm
unable to recreate the issue. I'm no longer getting the delay, and the NFS
export is still using sync. I'm now comparing the servers to see what's
diffe
That command is used for debugging to show the notifications sent by librbd
whenever image properties change. These notifications are used by other
librbd clients with the same image open to synchronize state (e.g. a
snapshot was created so instruct the other librbd client to refresh the
image's h
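The command being described is presumably `rbd watch`; a minimal sketch, assuming an image named "test-img" in the default "rbd" pool (illustrative names, and a cluster is required):

```shell
# Prints a line for each notification as other librbd clients with the
# same image open change its properties (snapshots, resizes, etc.).
rbd watch rbd/test-img
```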
Hi,
On 03/06/2016 05:39, Samuel Just wrote:
> Due to http://tracker.ceph.com/issues/16113, it would be best to avoid
> setting the sortbitwise flag on jewel clusters upgraded from previous
> versions until we get a point release out with a fix.
>
> The symptom is that setting the sortbitwise fl
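Concretely, checking for and clearing the flag on an affected cluster would look something like this (a sketch; requires a live cluster):

```shell
# See whether sortbitwise appears in the cluster's flag list:
ceph osd dump | grep flags
# If it was set on a cluster upgraded from pre-Jewel, clear it:
ceph osd unset sortbitwise
```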
Hi,
On 02/06/2016 04:44, Francois Lafont wrote:
> ~# grep ceph /etc/fstab
> id=cephfs,keyring=/etc/ceph/ceph.client.cephfs.keyring,client_mountpoint=/
> /mnt/ fuse.ceph noatime,nonempty,defaults,_netdev 0 0
[...]
> And I have rebooted. After the reboot, big surprise with this:
>
> ~# cat /tmp
I'll check it out
Thank you
On Jun 2, 2016 11:46 PM, "Michael Kuriger" wrote:
> For me, this same issue was caused by having too new a version of salt.
> I'm running salt-2014.1.5-1 on CentOS 7.2, so yours will probably be
> different. But I thought it was worth mentioning.
On Mon, May 30, 2016 at 8:33 PM, Ilya Dryomov wrote:
> On Mon, May 30, 2016 at 4:12 PM, Jens Offenbach wrote:
>> Hallo,
>> in my OpenStack Mitaka, I have installed the additional service "Manila"
>> with a CephFS backend. Everything is working. All shares are created
>> successfully:
>>
>> mani
Hi,
If you do not have a CephFS filesystem that was created with a Ceph
version older than Firefly, then you can ignore this message.
If you have such a filesystem, you need to run a special command at
some point while you are using Jewel, but before upgrading to future
versions. Please see the
It should be noted that using "async" with NFS _will_ corrupt your data if
anything goes wrong, e.g. the server crashes before writes reach disk.
It's ok-ish for something like an image library, but it's most certainly not OK
for VM drives, databases, or if you write any kind of binary blobs that you
can't recreate.
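To make the distinction concrete, in /etc/exports it is just one word per export (a sketch; paths and the client range are placeholders):

```
# Safe default: the server commits writes to stable storage before replying.
/srv/images  192.168.0.0/24(rw,sync,no_subtree_check)
# Faster but unsafe: replies before data hits disk; a server crash loses
# or corrupts recently "acknowledged" writes.
/srv/scratch 192.168.0.0/24(rw,async,no_subtree_check)
```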
If ceph-fuse is fast (you are