Re: [ceph-users] v0.80.11 Firefly released

2015-11-20 Thread Loic Dachary
Hi, On 20/11/2015 02:13, Yonghua Peng wrote: > I have been using the firefly release. Is there official documentation for > upgrading? Thanks. Here it is: http://docs.ceph.com/docs/firefly/install/upgrading-ceph/ Enjoy! > > > On 2015/11/20 6:08, Sage Weil wrote: >> This is a bugfix release

Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-20 Thread HEWLETT, Paul (Paul)
Flushing a GPT partition table using dd does not work, as the table is duplicated at the end of the disk as well. Use the sgdisk -Z command. Paul From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Mykola <mykola.dvor...@gmail.com> Date: Thursday, 19 November 2015 at
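For reference, a minimal sketch of zapping both the primary and the backup GPT structures before re-provisioning the disk (/dev/sdX is a placeholder; double-check the device first):

$ sgdisk --zap-all /dev/sdX   # same as sgdisk -Z; destroys the GPT (both copies) and the MBR
$ partprobe /dev/sdX          # have the kernel re-read the now-empty partition table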

Re: [ceph-users] After flattening the children image, snapshot still can not be unprotected

2015-11-20 Thread Jackie
Yes, it seems the problem was caused by cloning the image without the deep-flatten feature. On 2015-11-19 23:13, Jason Dillaman wrote: Does child image "images/0a38b10d-2184-40fc-82b8-8bbd459d62d2" have snapshots?
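For reference, a sketch of the usual sequence for fully detaching a clone so the parent snapshot can be unprotected (pool, image, and snapshot names are placeholders). Without the deep-flatten feature, snapshots taken on the clone keep the parent link even after a flatten, so they have to be removed first (snap purge is destructive):

$ rbd info images/child              # check features and the parent link
$ rbd snap purge images/child        # remove the clone's own snapshots
$ rbd flatten images/child           # copy all parent data into the clone
$ rbd children images/parent@base    # should list no remaining clones
$ rbd snap unprotect images/parent@base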

Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-20 Thread German Anders
Paul, thanks for the reply. I tried running the command and then ran the ceph-deploy command again with the same parameters, but I'm getting the same error message. I tried with XFS and it ran OK, so the problem is with btrfs. *German* 2015-11-20 6:08 GMT-03:00 HEWLETT, Paul (Paul) < paul.hewl...@a
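If btrfs keeps failing at the prepare step, one hedged workaround is to pin the filesystem type explicitly; a sketch, where the host, data disk, and journal device are placeholders:

$ ceph-deploy osd prepare --fs-type xfs cephnode01:/dev/sdb:/dev/sdc1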

[ceph-users] upgrading 0.94.5 to 9.2.0 notes

2015-11-20 Thread Kenneth Waegeman
Hi, I recently started a test to upgrade Ceph from 0.94.5 to 9.2.0 on CentOS 7. I had some issues not mentioned in the release notes. Here are some notes: * Upgrading instructions are only in the release notes, not updated on the upgrade page in the docs: http://docs.ceph.com/docs/master/insta
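One step that bites many Hammer-to-Infernalis upgrades is the switch of the daemons from running as root to running as the ceph user; a sketch of the two documented options (default paths shown, adjust to your layout):

# after stopping the daemons and upgrading the packages, fix ownership:
$ sudo chown -R ceph:ceph /var/lib/ceph /var/log/ceph
# or keep running as root for now by adding to ceph.conf:
#   setuser match path = /var/lib/ceph/$type/$cluster-$id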

[ceph-users] ceph infernalis pg creating forever

2015-11-20 Thread German Anders
Hi all, I've finished the install of a new ceph cluster with the infernalis 9.2.0 release, but I'm getting the following error message:
$ ceph -w
    cluster 29xx-3xxx-xxx9-xxx7-b8xx
     health HEALTH_WARN
            64 pgs degraded
            64 pgs stale
            64 pgs stuck degraded
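A few commands that usually narrow down why PGs stay in creating/degraded (the pool name is a placeholder):

$ ceph osd tree                   # are enough OSDs up and in for the replica count?
$ ceph pg dump_stuck stale        # which PGs are stuck, and on which OSDs
$ ceph osd pool get <pool> size   # replica count the CRUSH rule has to satisfy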

Re: [ceph-users] ceph infernalis pg creating forever

2015-11-20 Thread Gregory Farnum
This usually means your crush mapping for the pool in question is unsatisfiable. Check what the rule is doing. -Greg On Friday, November 20, 2015, German Anders wrote: > Hi all, I've finished the install of a new ceph cluster with infernalis > 9.2.0 release. But I'm getting the following error m
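One way to confirm whether the rule is satisfiable is to run the compiled CRUSH map through crushtool and look for bad mappings (the rule number and replica count below are examples):

$ ceph osd getcrushmap -o crush.bin
$ crushtool -i crush.bin --test --rule 0 --num-rep 3 --show-bad-mappings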

[ceph-users] High load during recovery (after disk placement)

2015-11-20 Thread Simon Engelsman
Hi, We experienced a very weird problem last week with our Ceph cluster and would like to ask your opinion(s) and advice. Our dedicated Ceph OSD nodes run with:
Total platform
- IO average: 2500 wrps, ~600 rps
- Replicas: 3x
- 2 pools: SSD (~50x1TB), spinner (~36x2TB)
- 1024 PGs per pool
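When recovery load like this hurts client IO, the knobs most often adjusted first are the backfill and recovery limits; the values below are only conservative examples, not recommendations:

$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'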

Re: [ceph-users] High load during recovery (after disk placement)

2015-11-20 Thread Robert LeBlanc
We are seeing some of these issues as well; here are some things we have learned. We found in our testing that enabling DISCARD as a mount option on the SSD OSDs did not measurably affect performance (though be sure to test on your own SSDs).
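For reference, what the discard option looks like for an XFS-formatted OSD data partition in /etc/fstab (device, mount point, and the choice of XFS are placeholders; a periodic fstrim is a common alternative to the mount option):

/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  noatime,discard  0 0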

Re: [ceph-users] ceph infernalis pg creating forever

2015-11-20 Thread German Anders
Here's the actual crush map:
$ cat /home/ceph/actual_map.out
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
de
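For completeness, a sketch of the round trip used to pull, edit, and re-inject such a map (file names are placeholders):

$ ceph osd getcrushmap -o crush.bin
$ crushtool -d crush.bin -o actual_map.out   # decompile to the text form shown above
$ crushtool -c actual_map.out -o crush.new   # recompile after editing
$ ceph osd setcrushmap -i crush.new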

Re: [ceph-users] Can't activate osd in infernalis

2015-11-20 Thread Steve Anthony
I had a similar issue when upgrading. When I originally created my journal partitions, I never set the partition type GUID to the Ceph journal GUID (https://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs). Instead it was set as "basic data partition". Pre-Infernalis this wasn't a p
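A sketch of fixing that on an existing journal partition with sgdisk (disk and partition number are placeholders; the GUID is the Ceph journal partition type):

$ sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdX
$ sgdisk -i 2 /dev/sdX    # "Partition GUID code" should now show the journal type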

Re: [ceph-users] upgrading 0.94.5 to 9.2.0 notes

2015-11-20 Thread Steve Anthony
On journal device permissions, see my reply in "Can't activate osd in infernalis". Basically, if you set the partition type GUID to 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 (the Ceph journal type GUID), the existing Ceph udev rules will set permissions on the partitions correctly at boot. Changing the
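To pick the new type GUID up without a reboot, re-reading the partition table and re-triggering udev is usually enough (assuming the stock Ceph udev rules shipped with the Infernalis packages are installed; device names are placeholders):

$ partprobe /dev/sdX
$ udevadm trigger --subsystem-match=block --action=add
$ ls -l /dev/sdX2        # should now be owned by ceph:ceph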

[ceph-users] librbd - threads grow with each Image object

2015-11-20 Thread Allen Liao
I am developing a Python application (using rbd.py) that requires querying information about tens of thousands of RBD images. I have noticed that the number of threads in my process grows linearly with each Image object that is created. After creating about 800 Image objects (that all share a sing
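A quick way to watch that growth from outside the process (the PID is a placeholder):

$ grep Threads /proc/<pid>/status   # or: ps -o nlwp= -p <pid>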

[ceph-users] Ceph-fuse single read limitation?

2015-11-20 Thread Z Zhang
Hi guys, we have a very small cluster with 3 OSDs but 40Gb NICs. We use ceph-fuse as the CephFS client with readahead enabled, but a single read of a large file from CephFS via fio, dd, or cp only achieves ~70+ MB/s, even if the fio or dd block size is set to 1MB or 4MB. From the cep
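For reference, the kind of single-stream read test being described, with the local page cache dropped first so the result reflects ceph-fuse rather than cached data (mount path and file name are placeholders):

$ echo 3 | sudo tee /proc/sys/vm/drop_caches
$ dd if=/mnt/cephfs/bigfile of=/dev/null bs=4M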

Re: [ceph-users] librbd - threads grow with each Image object

2015-11-20 Thread Haomai Wang
What's your Ceph version? Do you have rbd cache enabled? By default, each Image should only have one extra thread (maybe we should also obsolete this?). On Sat, Nov 21, 2015 at 9:26 AM, Allen Liao wrote: > I am developing a Python application (using rbd.py) that requires querying > information about te
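A quick check of what the client is actually running with, assuming an admin socket is configured for it (the socket path is a placeholder):

$ ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config get rbd_cache
# or set it in the [client] section of ceph.conf:  rbd cache = true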

Re: [ceph-users] librbd - threads grow with each Image object

2015-11-20 Thread Allen Liao
I'm on 0.94.5. No, rbd cache is not enabled. Even if each Image creates only one extra thread, if I have tens of thousands of Image objects open, there will be tens of thousands of threads in my process. Practically speaking, am I not allowed to cache Image objects? On Fri, Nov 20, 2015 at 8:24