Thanks David.
Thanks again Cary.
If I have
682 GB used, 12998 GB / 13680 GB avail,
then I still need to divide 13680 by 3 (my replication setting) to get my actual
usable capacity, right?
Thanks!
James Okken
Lab Manager
Dialogic Research Inc.
4 Gatehall Drive
Parsippany
NJ 07054
USA
Tel: 973 967 5179
TOTAL 13680G 682G 12998G 4.99
MIN/MAX VAR: 0.79/1.16 STDDEV: 0.67
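For what it is worth, a quick sanity check of that arithmetic (a rough sketch: it assumes every pool keeps 3 replicas and ignores imbalance and filesystem overhead):

    # The SIZE/USE/AVAIL figures above are raw capacity, counted before replication.
    RAW_GB=13680
    REPLICAS=3
    echo "usable ~ $(( RAW_GB / REPLICAS )) GB"    # ~4560 GB of logical capacity

If memory serves, the per-pool MAX AVAIL column of "ceph df" already divides by the pool's replica count, while the cluster-wide totals shown above do not.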
Thanks!
-Original Message-
From: Cary [mailto:dynamic.c...@gmail.com]
Sent: Friday, December 15, 2017 4:05 PM
To: James Okken
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)
James,
Usually, once the misplaced data has balanced out, the cluster should reach a
healthy state. If you run "ceph health detail", Ceph will show ...
...degraded, 26 active+recovery_wait+degraded, 1 active+remapped+backfilling,
308 active+clean, 176 active+remapped+wait_backfill; 333 GB data, 370 GB used,
5862 GB / 6233 GB avail; 0 B/s rd, 334 kB/s wr
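A few commands that are handy while the backfill runs (a generic sketch, nothing specific to this cluster):

    ceph health detail    # lists the degraded/misplaced PGs and the reason for each warning
    ceph -s               # one-shot cluster summary, including recovery throughput
    ceph -w               # follow the status live until it returns to HEALTH_OK
    ceph osd df           # per-OSD utilization, to confirm data is spreading evenly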
-Original Message-
From: Cary [mailto:dynamic.c...@gmail.com]
Sent: Thursday, December 1
Hi all,
Please let me know if I am missing steps or using the wrong steps.
I'm hoping to expand my small CEPH cluster by adding 4TB hard drives to each of
the 3 servers in the cluster.
I also need to change my replication factor from 1 to 3.
This is part of an Openstack environment deployed by F...
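For later readers, the rough shape of what that involves on each node (a hedged sketch, not a Fuel-specific procedure; /dev/sdX and <pool> are placeholders, and Jewel-era ceph-disk is assumed where newer releases would use ceph-volume):

    # Turn the new 4TB disk into an OSD.
    ceph-disk prepare /dev/sdX
    ceph-disk activate /dev/sdX1

    # Raise the replication factor on each pool that should keep 3 copies.
    ceph osd pool set <pool> size 3
    ceph osd pool set <pool> min_size 2

    # Watch the data rebalance onto the new OSDs.
    ceph -s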
Thanks again Ronny,
Ocfs2 is working well so far.
I have 3 nodes sharing the same 7TB MSA FC lun. Hoping to add 3 more...
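In case it helps anyone doing the same, growing the node count roughly means (a hedged sketch; the multipath device and slot count are placeholders): add the new nodes to /etc/ocfs2/cluster.conf on every machine, then make sure the filesystem has enough node slots, ideally with it unmounted unless your version supports doing this online:

    # Run from one node: raise the slot count so 6 nodes can mount the LUN.
    tunefs.ocfs2 -N 6 /dev/mapper/mpatha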
James Okken
Lab Manager
Dialogic Research Inc.
4 Gatehall Drive
Parsippany
NJ 07054
USA
Tel: 973 967 5179
Email: james.ok...@dialogic.com
Web: www.dialogic.com
Thanks Ric, thanks again Ronny.
I have a lot of good info now! I am going to try ocfs2.
Thanks
-- Jim
-Original Message-
From: Ric Wheeler [mailto:rwhee...@redhat.com]
Sent: Thursday, September 14, 2017 4:35 AM
To: Ronny Aasen; James Okken; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ...
...me luns/osds each, just to learn how to work with ceph.
If you want to have FC SAN attached storage on servers, shareable between
servers in a usable fashion, I would rather mount the same SAN lun on multiple
servers and use a cluster filesystem like ocfs or gfs that is made for this
kind of ...
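To make that concrete, the shared-LUN approach looks roughly like this (a sketch; the multipath device, label, and slot count are placeholders, and /etc/ocfs2/cluster.conf has to list every node before the cluster stack is started):

    # On every node: bring the OCFS2 cluster stack online (reads /etc/ocfs2/cluster.conf).
    /etc/init.d/o2cb online

    # On one node only: format the shared FC LUN with slots for 6 nodes.
    mkfs.ocfs2 -L sharedvol -N 6 /dev/mapper/mpatha

    # On every node: mount the same LUN.
    mount -t ocfs2 /dev/mapper/mpatha /mnt/shared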
Hi,
Novice question here:
The way I understand CEPH is that it distributes data across OSDs in a cluster.
The reads and writes come across the ethernet as RBD requests, and the actual
data I/O then also goes across the ethernet.
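As an aside, that flow is exactly what you exercise when you attach an RBD image by hand (a sketch; pool and image names are made up, and the image features are assumed to be compatible with the kernel client):

    rbd create --size 10240 rbd/test-img    # 10 GB image, striped across the OSDs
    rbd map rbd/test-img                    # kernel client; every read/write now travels over the network
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt/test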
I have a CEPH environment being set up on a fiber channel disk array (vi...
...not to do it.
Can you direct me to a good tutorial on how to do so?
And you're right, I am a beginner.
James Okken
Lab Manager
Dialogic Research Inc.
4 Gatehall Drive
Parsippany
NJ 07054
USA
Tel: 973 967 5179
Email: james.ok...@dialogic.com
Web: www.dialogic.com – The Network Fuel
5494 192.168.1.4:6802/1005494
192.168.1.4:6803/1005494 192.168.0.6:6802/1005494 exists,up
ddfca14e-e6f6-4c48-aa8f-0ebfc765d32f
root@node-1:/var/log#
James Okken
Lab Manager
Dialogic Research Inc.
4 Gatehall Drive
Parsippany
NJ 07054
USA
Tel: 973 967 5179
Email: james.ok...@dialogic.com