Hello,
In my Sanity check thread I postulated yesterday that to get the same
redundancy and resilience for disk failures (excluding other factors) as
my proposed setup (2 nodes, 2x 11 3TB HDs RAID6 per node, 2
global hotspares, thus 4 OSDs) the Ceph way one would need something
like 6 nodes
Yeah, I saw erasure coding mentioned a little while ago, but that's
likely not to be around by the time I'm going to deploy things.
Nevermind that super bleeding edge isn't my style when it comes to
production systems. ^o^
And at something like 600 disks, that would still have to be a
Yehuda,
Do you have any further detail on this radosgw bug?
Does it only apply to emperor?
Joel van Velden
On 19/12/2013 5:09 a.m., Yehuda Sadeh wrote:
We were actually able to find the culprit yesterday. While the nginx
workaround might be a valid solution (really depends on how nginx
reads
On 2013-12-19 at 17:39:54, Christian Balzer ch...@gol.com wrote:
Hello,
In my Sanity check thread I postulated yesterday that to get the
same redundancy and resilience for disk failures (excluding other
factors) as my proposed setup (2 nodes, 2x 11 3TB HDs RAID6 per node, 2
On 12/19/2013 09:39 AM, Christian Balzer wrote:
Hello,
In my Sanity check thread I postulated yesterday that to get the same
redundancy and resilience for disk failures (excluding other factors) as
my proposed setup (2 nodes, 2x 11 3TB HDs RAID6 per node, 2
global hotspares, thus 4 OSDs) the
Working now. Removed the escape char from the api_key.
Thank you so much for your suggestions.
On Thu, Dec 19, 2013 at 4:31 AM, Andrew Woodward xar...@gmail.com wrote:
I see Python 2.6, so I assume this is a RHEL6 distro. I've not been able to
use mod_fcgid in any setup on RHEL6 variants. I'd
Hello,
I would like to install ceph on a Netgear ReadyNAS 102.
It is Debian wheezy based.
I have tried to add the ceph repository, but the NAS is armel architecture and
I see you only provide a repo for the armhf architecture.
How can I solve this problem?
Thanks,
Mario
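For what it's worth, you can confirm what the box reports with standard
Debian tooling (nothing Ceph-specific):

  dpkg --print-architecture

If that prints armel, the armhf packages will not install, and building
from source would be the remaining option.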
Hi folks,
I'm doing a rados bench test now. The cluster is deployed by
ceph-deploy.
Ceph version: Emperor
FS: XFS
I created a pool test3 with size 1:
pool 13 'test3' rep size 1 min_size 1 crush_ruleset 0 object_hash rjenkins
pg_num 2000 pgp_num 2000 last_change 166 owner 0
The Rados bench
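For reference, a minimal version of such a test looks roughly like this
(pool name and PG counts taken from the dump above; the 60-second
duration is just an example):

  ceph osd pool create test3 2000 2000
  ceph osd pool set test3 size 1
  rados bench -p test3 60 write --no-cleanup
  rados bench -p test3 60 seq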
2013/12/17 Gandalf Corvotempesta gandalf.corvotempe...@gmail.com:
There isn't anything about how to define a cluster network for OSDs.
I don't know how to set a cluster address for each OSD.
No help on this? I would like to set a cluster address for each OSD.
Is this possible with ceph-deploy?
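For reference, the usual way to do this (not ceph-deploy specific) is to
set the networks in ceph.conf before the OSDs are created; the subnets
below are made up:

  [global]
  public network = 192.168.1.0/24
  cluster network = 10.10.1.0/24

or, per OSD, an explicit address:

  [osd.0]
  cluster addr = 10.10.1.1

With ceph-deploy you would edit the ceph.conf in your working directory
and push it with 'ceph-deploy config push <hostname>' before creating
the OSDs.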
On Thu, Dec 19, 2013 at 12:39 AM, Christian Balzer ch...@gol.com wrote:
Hello,
In my Sanity check thread I postulated yesterday that to get the same
redundancy and resilience for disk failures (excluding other factors) as
my proposed setup (2 nodes, 2x 11 3TB HDs RAID6 per node, 2
global
Three things come to my mind when looking at your setup/results:
1) The number of PGs. According to the documentation it should be:
(number of OSDs * 100) / number of replicas.
Maybe playing with the number a bit would yield better results; see the
worked example below.
2) Although I am NOT using SSDs as journals, I am
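A worked example of that formula, with made-up numbers: for 40 OSDs and
3 replicas, (40 * 100) / 3 = 1333, usually rounded up to the next power
of two, so pg_num = 2048. For a size=1 pool the divisor is 1, so the
2000 PGs above would match the formula only at around 20 OSDs.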
Hello,
When I try to deploy a new monitor on a new node with ceph-deploy, I have this
error:
—
ceph@p1:~$ ceph-deploy mon create s4.13h.com
[ceph_deploy.cli][INFO ] Invoked (1.3.3): /usr/bin/ceph-deploy mon create
s4.13h.com
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts
Hi Don,
ceph.conf is readable by all users
Thanks
Song
On Wed, Dec 18, 2013 at 10:19 AM, Don Talton (dotalton)
dotal...@cisco.comwrote:
Check that cinder has access to read your ceph.conf file. I’ve had to
644 mine.
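That is, something like this (path assumes the default location):

  chmod 644 /etc/ceph/ceph.conf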
Ceph FS is really cool and exciting! It makes a lot of sense for us to
leverage it.
Are there any established goals or timelines for using Ceph FS in
production? Are specific individual support contracts available if Ceph
FS is to be used in production?
Thanks!
On 19 Dec 2013, at 16:43, Gruher, Joseph R joseph.r.gru...@intel.com wrote:
It seems like this calculation ignores that in a large Ceph cluster with
triple replication having three drive failures doesn't automatically
guarantee data loss (unlike a RAID6 array)?
not true with RBD images,
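A back-of-envelope sketch of the probability in question (my numbers,
purely illustrative, and CRUSH failure domains are ignored): with 600
OSDs there are 600*599*598/6, roughly 35.8 million, distinct 3-disk
combinations. A pool with 32768 PGs occupies at most 32768 of them, so
the chance that 3 simultaneously failed disks hold all three replicas
of some PG is on the order of 32768 / 35,800,000, i.e. about 0.1%. In a
RAID6 array a third failure is always fatal; here it almost never is,
although when a PG is lost, every RBD image striped across it is
affected, which seems to be the caveat raised above.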
On 12/19/2013 08:39 PM, Wolfgang Hennerbichler wrote:
On 19 Dec 2013, at 16:43, Gruher, Joseph R joseph.r.gru...@intel.com wrote:
It seems like this calculation ignores that in a large Ceph cluster with triple
replication having three drive failures doesn't automatically guarantee data
loss
On 12/19/2013 08:07 PM, Abhijeet Nakhwa wrote:
Ceph FS is really cool and exciting! It makes a lot of sense for us to
leverage it.
Is there any established goal / timelines for using Ceph FS for
production use? Are specific individual support contracts available if
Ceph FS is to be used in
On 19.12.2013 at 20:39, Wolfgang Hennerbichler wo...@wogri.com wrote:
On 19 Dec 2013, at 16:43, Gruher, Joseph R joseph.r.gru...@intel.com wrote:
It seems like this calculation ignores that in a large Ceph cluster with
triple replication having three drive failures doesn't automatically
Hi all,
I've been working on some ceph-deploy automation and think I've stumbled on an
interesting behavior. I create a new cluster and specify 3 machines. If all 3
are not up and able to be ssh'd into with the account I created for ceph-deploy,
then the mon create process will fail and the
How do I find or create a user that can use the admin operations for the
object gateway?
The manual says "Some operations require that the user holds special
administrative capabilities."
But I can't find whether there is a pre-created user with these, or how to
create one myself.
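For reference: as far as I can tell there is no pre-created admin user;
you create an ordinary user and grant it caps, along these lines (uid
and display name are made up):

  radosgw-admin user create --uid=admin --display-name="Admin"
  radosgw-admin caps add --uid=admin \
    --caps="users=*;buckets=*;metadata=*;usage=*;zone=*"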
On 12/16/2013 02:42 AM, Christian Balzer wrote:
Hello,
Hi Christian!
new to Ceph, not new to replicated storage.
Simple test cluster with 2 identical nodes running Debian Jessie, thus ceph
0.48. And yes, I very much prefer a distro supported package.
I know you'd like to use the distro
What impact does rebooting nodes in a ceph cluster have on the health of
the ceph cluster? Can it trigger rebalancing activities that then have
to be undone once the node comes back up?
I have a 4 node ceph cluster; each node has 11 OSDs. There is a single
pool with redundant storage.
If it
On Thu, 19 Dec 2013, John-Paul Robinson wrote:
What impact does rebooting nodes in a ceph cluster have on the health of
the ceph cluster? Can it trigger rebalancing activities that then have
to be undone once the node comes back up?
I have a 4 node ceph cluster each node has 11 osds. There
On 20/12/13 13:51, Sage Weil wrote:
On Thu, 19 Dec 2013, John-Paul Robinson wrote:
What impact does rebooting nodes in a ceph cluster have on the health of
the ceph cluster? Can it trigger rebalancing activities that then have
to be undone once the node comes back up?
I have a 4 node ceph
So is it recommended to adjust the rebalance timeout to align with the time to
reboot individual nodes?
I didn't see this in my pass through the ops manual but maybe I'm not looking
in the right place.
Thanks,
~jpr
On Dec 19, 2013, at 6:51 PM, Sage Weil s...@inktank.com wrote:
On
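For reference, the usual pattern for a planned reboot is either to set
the noout flag for the duration of the maintenance window:

  ceph osd set noout
  (reboot the node)
  ceph osd unset noout

or to raise the timeout after which a down OSD is marked out, e.g. in
ceph.conf (600 seconds is an example; the default is 300):

  [mon]
  mon osd down out interval = 600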
Hello Mark,
On Thu, 19 Dec 2013 17:18:01 -0600 Mark Nelson wrote:
On 12/16/2013 02:42 AM, Christian Balzer wrote:
Hello,
Hi Christian!
new to Ceph, not new to replicated storage.
Simple test cluster with 2 identical nodes running Debian Jessie, thus
ceph 0.48. And yes, I very
Do you have any further detail on this radosgw bug?
https://github.com/ceph/ceph/commit/0f36eddbe7e745665a634a16bf3bf35a3d0ac424
https://github.com/ceph/ceph/commit/0b9dc0e5890237368ba3dc34cb029010cb0b67fd
Does it only apply to emperor?
The bug is present in dumpling too.
Hello,
On Thu, 19 Dec 2013 12:12:13 +0100 Mariusz Gronczewski wrote:
On 2013-12-19 at 17:39:54, Christian Balzer ch...@gol.com wrote:
[snip]
So am I completely off my wagon here?
How do people deal with this when potentially deploying hundreds of
disks in a single
Hello,
On Thu, 19 Dec 2013 12:42:15 +0100 Wido den Hollander wrote:
On 12/19/2013 09:39 AM, Christian Balzer wrote:
[snip]
I'd suggest using different vendors for the disks, so that means you'll
probably be mixing Seagate and Western Digital in such a setup.
That's funny, because I
Hello,
On Thu, 19 Dec 2013 15:43:16 + Gruher, Joseph R wrote:
[snip]
It seems like this calculation ignores that in a large Ceph cluster with
triple replication having three drive failures doesn't automatically
guarantee data loss (unlike a RAID6 array)? If your data is triple
On Thu, 19 Dec 2013 21:01:47 +0100 Wido den Hollander wrote:
On 12/19/2013 08:39 PM, Wolfgang Hennerbichler wrote:
On 19 Dec 2013, at 16:43, Gruher, Joseph R joseph.r.gru...@intel.com
wrote:
It seems like this calculation ignores that in a large Ceph cluster
with triple replication
I just realized my email is not clear. If the first mon is up and the
additional initial members are not, then the process fails.
mon initial members is a race prevention mechanism whose purpose is
to prevent your monitors from forming separate quorums when they're
brought up by automated software provisioning systems (by not allowing
monitors to form a quorum unless everybody in the list is a member).
If you want to add
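Concretely, that list lives in ceph.conf, e.g. (host names and addresses
below are made up):

  [global]
  mon initial members = mon1, mon2, mon3
  mon host = 192.168.1.11, 192.168.1.12, 192.168.1.13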
On 12/19/2013 2:02 PM, Blair Nilsson wrote:
How do I find or create a user that can use the admin operations for the
object gateway?
The manual says "Some operations require that the user holds special
administrative capabilities."
But I can't find whether there is a pre-created user with these, or
On 12/19/2013 04:00 PM, Peder Jansson wrote:
Hi,
I'm testing Ceph with the RBD/QEMU driver through libvirt to store my VM
images on. Installation and configuration all went very well with the
ceph-deploy tool. I have set up cephx authentication in libvirt and that
works like a charm too.
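For anyone following along, the cephx wiring referred to above is
typically a libvirt secret holding the Ceph key, roughly like this
(client name and file names are placeholders):

  # secret.xml
  <secret ephemeral='no' private='no'>
    <usage type='ceph'>
      <name>client.libvirt secret</name>
    </usage>
  </secret>

  virsh secret-define --file secret.xml
  virsh secret-set-value --secret <uuid-printed-by-define> \
    --base64 "$(ceph auth get-key client.libvirt)"

The RBD disk in the domain XML then references that secret with
<auth username='libvirt'><secret type='ceph' uuid='...'/></auth>.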
On 12/18/2013 09:39 PM, Tim Bishop wrote:
Hi all,
I'm investigating and planning a new Ceph cluster starting with 6
nodes with currently planned growth to 12 nodes over a few years. Each
node will probably contain 4 OSDs, maybe 6.
The area I'm currently investigating is how to configure the