Re: [ceph-users] ZFS or BTRFS for performance?

2016-03-22 Thread Mike Almateia
20-Mar-16 23:23, Schlacta, Christ wrote: What do you use as an interconnect between your OSDs and your clients? Two Mellanox 10Gb SFP NICs, dual port each = 4 x 10Gbit/s ports on each server. On each server 2 ports are bonded, so we have 2 bonds: one for the Cluster net and one for the Storage net. Client servers
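For reference, splitting client and replication traffic across the two bonds happens in ceph.conf; a minimal sketch, assuming hypothetical subnets that match the bonded interfaces:

  # /etc/ceph/ceph.conf (fragment; subnets are illustrative)
  [global]
  public network  = 10.22.10.0/24   # client-facing bond
  cluster network = 10.22.11.0/24   # OSD replication bond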

Re: [ceph-users] ZFS or BTRFS for performance?

2016-03-20 Thread Mike Almateia
18-Mar-16 21:15, Schlacta, Christ wrote: Insofar as I've been able to tell, both BTRFS and ZFS provide similar capabilities back to CEPH, and both are sufficiently stable for the basic CEPH use case (single disk -> single mount point), so the question becomes this: which actually provides better
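For anyone comparing, the OSD filesystem choice mostly shows up as ceph.conf keys used by ceph-disk at creation time; a sketch for a BTRFS-backed OSD with purely illustrative option values (ZFS has no equivalent keys, the OSD just sits on a pre-created dataset):

  # /etc/ceph/ceph.conf (fragment; values are illustrative)
  [osd]
  osd mkfs type = btrfs
  osd mkfs options btrfs = -m single -n 32768 -l 32768
  osd mount options btrfs = rw,noatime,user_subvol_rm_allowed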

Re: [ceph-users] Infernalis 9.2.1: the "rados df" command shows wrong data

2016-03-07 Thread Mike Almateia
07-Mar-16 21:28, Gregory Farnum wrote: On Fri, Mar 4, 2016 at 11:56 PM, Mike Almateia wrote: Hello Cephers! On my small cluster I see this: [root@c1 ~]# rados df pool name KB objects clones degraded unfound rd rd KB wr wr KB data
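For comparison, the same counters can be pulled from the mon side; a quick cross-check that assumes nothing about the pool layout:

  ceph df detail        # per-pool USED/OBJECTS as the mons report it
  ceph osd pool stats   # per-pool client I/O and recovery rates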

Re: [ceph-users] Cache Pool and EC: objects didn't flush to a cold EC storage

2016-03-07 Thread Mike Almateia
option. But the cluster started working again after I added a new OSD to the cache tier pool and the 'full OSD' status was cleared. Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Sun, Mar 6, 2016 at 2:17 AM, Mike Almateia wrote: Hello Cephers! When my cluster
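Until the extra OSD kicks in, the cache can also be drained by hand; a minimal sketch, assuming a hypothetical cache pool name 'cache':

  ceph health detail                       # confirm which OSDs are near/full
  rados -p cache cache-flush-evict-all     # force dirty objects down to the cold EC pool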

Re: [ceph-users] Cache Pool and EC: objects didn't flush to a cold EC storage

2016-03-07 Thread Mike Almateia
06-Mar-16 17:28, Christian Balzer wrote: On Sun, 6 Mar 2016 12:17:48 +0300 Mike Almateia wrote: Hello Cephers! When my cluster hit the "full ratio" settings, objects from the cache pool didn't flush to the cold storage. As always, versions of everything, Ceph foremost. Yes of cours
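For what it's worth, on this (Infernalis-era) release the cluster-wide ratios can be raised temporarily to buy headroom while the flush catches up; a sketch, assuming the defaults are still in place:

  ceph health detail              # lists the near-full / full OSDs
  ceph pg set_nearfull_ratio 0.90
  ceph pg set_full_ratio 0.97     # temporary headroom only, revert afterwards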

[ceph-users] Cache Pool and EC: objects didn't flush to a cold EC storage

2016-03-06 Thread Mike Almateia
Hello Cephers! When my cluster hit the "full ratio" settings, objects from the cache pool didn't flush to the cold storage. 1. Hit the 'full ratio': 2016-03-06 11:35:23.838401 osd.64 10.22.11.21:6824/31423 4327 : cluster [WRN] OSD near full (90%) 2016-03-06 11:35:55.447205 osd.64 10.22.11.21:6824/3142
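For context, the flush/evict thresholds that should have fired are per-pool settings; a minimal sketch, assuming a hypothetical cache pool 'cache' and illustrative sizes:

  ceph osd pool set cache target_max_bytes 1099511627776   # 1 TiB cap; adjust to the real cache size
  ceph osd pool set cache cache_target_dirty_ratio 0.4     # start flushing at 40% dirty
  ceph osd pool set cache cache_target_full_ratio 0.8      # start evicting at 80% full

Note that without target_max_bytes (or target_max_objects) the ratios have nothing to be a ratio of, so the tiering agent never flushes.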

[ceph-users] Infernalis 9.2.1: the "rados df" command shows wrong data

2016-03-04 Thread Mike Almateia
Hello Cephers! On my small cluster I see this: [root@c1 ~]# rados df pool name KB objects clones degraded unfound rd rd KB wr wr KB data 0000 06
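A quick way to see whether the counters or the objects are wrong, assuming the pool in question is 'data':

  rados df                      # the suspect per-pool counters
  rados -p data ls | wc -l      # actual object count in the pool (slow on big pools)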

[ceph-users] Verified and tested SAS/SATA SSD for Ceph

2015-11-24 Thread Mike Almateia
Hello. Does someone have a list of verified/tested SSD drives for Ceph? I'm thinking about the Ultrastar SSD1600MM SAS SSD for our all-flash Ceph cluster. Does somebody use it in production? -- Mike, runs.
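Independent of any list, the usual acceptance test is synchronous 4k writes straight to the drive, since that is what the OSD journal does; a sketch, assuming a hypothetical (and scratch!) device /dev/sdX:

  # destructive: run only against an empty test drive
  fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting \
      --name=journal-test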

Re: [ceph-users] All SSD Pool - Odd Performance

2015-11-18 Thread Mike Almateia
18-Nov-15 14:39, Sean Redmond wrote: Hi, I have a performance question for anyone running an SSD-only pool. Let me detail the setup first. 12 x Dell PowerEdge R630 (2 x 2620v3, 64GB RAM), 8 x Intel DC S3710 800GB, dual-port Solarflare 10GB/s NIC (one front and one back), Ceph 0.94.5, Ubuntu 14.04 (3
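To separate Ceph from the hardware, it can help to benchmark the pool directly from a client; a sketch, assuming a hypothetical test pool 'ssdpool':

  rados bench -p ssdpool 60 write -b 4096 -t 32 --no-cleanup   # 4k writes, 32 in flight
  rados bench -p ssdpool 60 seq -t 32                          # read back the same objects
  rados -p ssdpool cleanup                                     # remove the benchmark objects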

Re: [ceph-users] Building a Pb EC cluster for a cheaper cold storage

2015-11-12 Thread Mike Almateia
12-Nov-15 03:33, Mike Axford wrote: On 10 November 2015 at 10:29, Mike Almateia wrote: Hello. For our CCTV stream-storage project we decided to use a Ceph cluster with an EC pool. The input requirements are not scary: max. 15 Gbit/s input traffic from CCTV, 30 days of storage, 99% write operations, a
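Back-of-envelope for the raw capacity, just to anchor the "Pb" in the subject (the EC parameters here are purely illustrative): 15 Gbit/s is about 1.9 GB/s, roughly 162 TB per day, so 30 days of retention is about 4.9 PB of logical data; with something like k=8, m=3 erasure coding that becomes roughly 6.7 PB raw, before any free-space headroom.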

[ceph-users] Building a Pb EC cluster for a cheaper cold storage

2015-11-10 Thread Mike Almateia
Hello. For our CCTV stream-storage project we decided to use a Ceph cluster with an EC pool. The input requirements are not scary: max. 15 Gbit/s input traffic from CCTV, 30 days of storage, 99% write operations, and the cluster must be able to grow without downtime. For now our vision of the architecture is like this: * 6 J
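For the EC part, the pool setup itself is short; a minimal sketch with illustrative parameters (profile name, k/m and PG count would need to be sized for the real cluster):

  ceph osd erasure-code-profile set cctv-profile k=8 m=3 ruleset-failure-domain=host
  ceph osd pool create cctv 4096 4096 erasure cctv-profile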