20-Mar-16 23:23, Schlacta, Christ wrote:
What do you use as an interconnect between your osds, and your clients?
Two dual-port Mellanox 10Gb SFP NICs = 4 x 10Gbit/s ports on each
server.
On each server the ports are bonded in pairs, so we have two bonds: one for the
Cluster net and one for the Storage net.
Clients servers
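For reference, the split between the two bonds is what ends up in ceph.conf as the public and cluster networks; a minimal sketch, with placeholder subnets rather than the poster's actual ones:

[global]
    # bond0 = "Storage net" (clients <-> OSDs), bond1 = "Cluster net" (OSD replication)
    public network  = 192.168.10.0/24
    cluster network = 192.168.20.0/24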
18-Mar-16 21:15, Schlacta, Christ wrote:
Insofar as I've been able to tell, both BTRFS and ZFS provide similar
capabilities back to CEPH, and both are sufficiently stable for the
basic CEPH use case (Single disk -> single mount point), so the
question becomes this: Which actually provides better
07-Mar-16 21:28, Gregory Farnum wrote:
On Fri, Mar 4, 2016 at 11:56 PM, Mike Almateia wrote:
Hello Cephers!
On my small cluster I see this:
[root@c1 ~]# rados df
pool name               KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
data
option.
But the cluster started working again after I added a new OSD to the cache tier
pool, and the 'full OSD' status was cleared.
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Sun, Mar 6, 2016 at 2:17 AM, Mike Almateia wrote:
Hello Cephers!
When my cluster
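For context, the per-OSD fullness that triggers the 'full OSD' flag can be inspected with standard commands; a minimal sketch (pool and OSD names are not from this thread):

# Per-OSD utilization, useful for spotting full/near-full cache-tier OSDs
ceph osd df tree
# Lists exactly which OSDs are flagged near full or full
ceph health detail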
06-Mar-16 17:28, Christian Balzer wrote:
On Sun, 6 Mar 2016 12:17:48 +0300 Mike Almateia wrote:
Hello Cephers!
When my cluster hit the "full ratio" setting, objects from the cache pool
didn't flush to the cold storage.
As always, versions of everything, Ceph foremost.
Yes, of course
Hello Cephers!
When my cluster hit the "full ratio" setting, objects from the cache pool
didn't flush to the cold storage.
1. Hit the 'full ratio':
2016-03-06 11:35:23.838401 osd.64 10.22.11.21:6824/31423 4327 : cluster
[WRN] OSD near full (90%)
2016-03-06 11:35:55.447205 osd.64 10.22.11.21:6824/3142
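For reference, flushing from a cache tier to the cold pool is driven by per-pool targets that have to be set explicitly; a minimal sketch, with the pool name and values as placeholders rather than this cluster's settings:

# Flushing/eviction only starts once these per-pool targets are configured
ceph osd pool set hot-pool target_max_bytes 1099511627776    # ~1 TiB, example value
ceph osd pool set hot-pool cache_target_dirty_ratio 0.4
ceph osd pool set hot-pool cache_target_full_ratio 0.8
# Manually flush and evict everything currently in the cache tier
rados -p hot-pool cache-flush-evict-all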
Hello Cephers!
On my small cluster I see this:
[root@c1 ~]# rados df
pool name               KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
data                     0            0            0            0
06
Hello.
Does anyone have a list of verified/tested SSD drives for Ceph?
I'm thinking about the Ultrastar SSD1600MM SAS SSD for our all-flash Ceph
cluster. Does anybody use it in production?
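One common way to sanity-check an SSD model before committing to it for Ceph is a sustained O_DSYNC 4k write test, since that is the journal's write pattern; a minimal fio sketch, with the device path as a placeholder (it writes to the raw device, so use an empty disk):

fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=ceph-journal-ssd-test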
--
Mike, runs.
18-Nov-15 14:39, Sean Redmond wrote:
Hi,
I have a performance question for anyone running an SSD only pool. Let
me detail the setup first.
12 x Dell PowerEdge R630 (2 x 2620v3, 64GB RAM)
8 x Intel DC S3710 800GB
Dual-port Solarflare 10Gb/s NIC (one port for the front network, one for the back)
Ceph 0.94.5
Ubuntu 14.04 (3
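For comparing numbers on a setup like this, a RADOS-level baseline is the usual starting point; a minimal sketch, with the pool name as a placeholder:

# 60 s of writes (default 4 MiB objects, 16 concurrent ops), then sequential reads
rados bench -p ssd-test 60 write --no-cleanup
rados bench -p ssd-test 60 seq
rados -p ssd-test cleanup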
12-Nov-15 03:33, Mike Axford wrote:
On 10 November 2015 at 10:29, Mike Almateia wrote:
Hello.
For our CCTV stream-storage project we decided to use a Ceph cluster with an EC
pool.
The input requirements are not scary: max. 15 Gbit/s of input traffic from CCTV,
30-day retention,
99% write operations, a
Hello.
For our CCTV stream-storage project we decided to use a Ceph cluster with
an EC pool.
The input requirements are not scary: max. 15 Gbit/s of input traffic from CCTV,
30-day retention,
99% write operations, and the cluster must be able to grow without downtime.
For now our vision of the architecture looks like this:
* 6 J
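For a write-mostly workload like this, the EC pool itself is simple to create; a minimal sketch, with the profile name, k/m values and PG count chosen only for illustration rather than taken from the project:

# Example profile: 4 data + 2 coding chunks, one chunk per host
ceph osd erasure-code-profile set cctv-profile k=4 m=2 ruleset-failure-domain=host
ceph osd pool create cctv-ec 1024 1024 erasure cctv-profile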