[ceph-users] upgraded to Ubuntu 16.04, getting assert failure

2016-04-09 Thread Don Waterloo
I have a 6 OSD system (with 3 mon and 3 mds); it is running CephFS as part of its task. I have upgraded the 3 mon nodes to Ubuntu 16.04 and the bundled ceph 10.1.0-0ubuntu1 (upgraded from Ubuntu 15.10 with ceph 0.94.6-0ubuntu0.15.10.1). 2 of the mon nodes are happy and up, but the 3rd is giving
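A minimal way to check where the mons stand during a mixed-version upgrade (mon and host names below are placeholders, not taken from the thread):

    ceph -s
    ceph quorum_status --format json-pretty
    # on each mon host, via the local admin socket:
    ceph daemon mon.$(hostname -s) version
    # inspect the failing mon's backtrace (Ubuntu 16.04 uses systemd units):
    journalctl -u ceph-mon@node3 --since "1 hour ago"
    less /var/log/ceph/ceph-mon.node3.log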

Re: [ceph-users] moving qcow2 image of a VM/guest (

2016-04-09 Thread Brian ::
If you have a qcow2 image on *local* type storage and move it to a ceph pool, Proxmox will automatically convert the image to raw. Performance is entirely down to your particular setup - moving an image to a ceph pool certainly won't guarantee a performance increase - in fact the opposite could happen. Yo
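For reference, a rough sketch of the conversion Proxmox performs on a disk move to RBD-backed storage; the path, VM ID and pool name are examples only:

    # convert the qcow2 to raw, writing straight into the ceph pool
    qemu-img convert -p -f qcow2 -O raw \
        /var/lib/vz/images/100/vm-100-disk-1.qcow2 rbd:rbd/vm-100-disk-1
    # verify the resulting image
    rbd info rbd/vm-100-disk-1

(qemu-img needs to be built with rbd support for the rbd: target to work.)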

Re: [ceph-users] Adding new disk/OSD to ceph cluster

2016-04-09 Thread lin zhou
Some supplements: #2: ceph supports heterogeneous nodes. #3: I think if you add an OSD by hand, you should set its `osd crush reweight` to 0 first and then increase it to suit the disk size, and lower the priority and thread count of recovery and backfill, like this: osd_max_backfills 1 osd_recovery_max_a
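A sketch of those steps, assuming the new OSD is osd.12 on a 4 TB disk (the ID and weights are examples):

    # start the hand-added OSD with zero CRUSH weight ...
    ceph osd crush reweight osd.12 0
    # ... then raise it in steps towards the disk size in TiB
    ceph osd crush reweight osd.12 1.0
    ceph osd crush reweight osd.12 3.64
    # throttle recovery/backfill while data moves
    ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_op_priority 1'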

Re: [ceph-users] moving qcow2 image of a VM/guest (

2016-04-09 Thread Lindsay Mathieson
On 9/04/2016 11:01 PM, Mad Th wrote: After this move, does the qcow2 image get converted to some raw or rbd file format? raw (rbd block) Will moving vm/guest images to ceph storage pool after converting qcow2 to raw format first improve performance? I doubt it. We still see some i/o
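One way to put numbers on that is to run the same fio job inside the guest before and after the move; the job below is only an example workload:

    fio --name=randwrite --filename=/root/fio.test --size=1G \
        --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
        --iodepth=16 --runtime=60 --time_based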

[ceph-users] moving qcow2 image of a VM/guest (

2016-04-09 Thread Mad Th
I was reading that raw format is faster than qcow2. We have a few vm/guest images in qcow2 which we have moved to a ceph storage pool (using the Proxmox GUI disk move). After this move, does the qcow2 image get converted to some raw or rbd file format? Will moving vm/guest images to ceph storage po
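To see what the move produced, something like the following works (VM ID and pool name are examples):

    qm config 100 | grep -i disk     # disk line should now reference the ceph storage
    rbd ls rbd                       # expect an image named like vm-100-disk-1
    rbd info rbd/vm-100-disk-1       # RBD images are plain raw block devices, no qcow2 layer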

Re: [ceph-users] Adding new disk/OSD to ceph cluster

2016-04-09 Thread ceph
Without knowing Proxmox-specific stuff... #1: just create an OSD the regular way. #2: it is safe; however, you may either create a SPOF (osd_crush_chooseleaf_type = 0) or underuse your cluster (osd_crush_chooseleaf_type = 1). On 09/04/2016 14:39, Mad Th wrote: > We have a 3 node proxmox/ceph clu
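The setting being discussed lives in ceph.conf and only takes effect when the initial CRUSH map is generated; a sketch:

    [global]
    # 0 = pick replicas at the OSD level (replicas may land on one host -> possible SPOF)
    # 1 = pick replicas at the host level (default; needs at least as many hosts as the pool size)
    osd crush chooseleaf type = 1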

[ceph-users] Adding new disk/OSD to ceph cluster

2016-04-09 Thread Mad Th
We have a 3 node proxmox/ceph cluster ... each with 4 x 4 TB disks. 1) If we want to add more disks, what are the things that we need to be careful about? Will the following steps automatically add it to ceph.conf? ceph-disk zap /dev/sd[X] pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y] whe
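For reference, a sketch of that sequence on one node (device names are placeholders; double-check them before zapping):

    ceph-disk zap /dev/sdX
    pveceph createosd /dev/sdX -journal_dev /dev/sdY
    ceph osd tree    # the new OSD should appear under this host
    ceph -s          # watch backfill/recovery progress

On ceph-disk based setups the OSD is normally activated via udev rather than a per-OSD ceph.conf entry, but check the file afterwards rather than assuming.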

Re: [ceph-users] 800TB - Ceph Physical Architecture Proposal

2016-04-09 Thread Burkhard Linke
Hi, *snipsnap* On 09.04.2016 08:11, Christian Balzer wrote: 3 MDS nodes: -SuperMicro 1028TP-DTR (one node from scale-out chassis) --2x E5-2630v4 --128GB RAM --2x 120GB SSD (RAID 1 for OS) Not using CephFS, but if the MDS are like all the other Ceph bits (MONs in particular) they are likely to