For that matter, is there a way to build Calamari without going the whole
Vagrant path at all? Some way of just building it through command-line tools?
I would be building it on an OpenStack instance, no GUI. It seems silly to have
to install an entire VirtualBox environment inside something
ste...@me.com wrote:
On 26/03/2015, at 17.18, LaBarre, James (CTR) A6IT
james.laba...@cigna.com wrote:
For that matter, is there a way to build Calamari without going the whole
Vagrant path at all? Some way of just building it through command-line tools?
I would
If I have a machine/VM I am using as an Admin node for a Ceph cluster, can I
relocate that admin to another machine/VM after I've built a cluster? I would
expect that, since the Admin node isn't an actual operating part of the cluster
itself (other than Calamari, if it happens to be running), the rest of the
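A rough sketch of how such a relocation could look with ceph-deploy (hostnames
and the working directory are placeholders, and it assumes ceph-deploy is
already installed on the new machine):

    # run on the machine that is to become the new admin node
    mkdir ~/my-cluster && cd ~/my-cluster
    scp old-admin:~/my-cluster/ceph.conf .   # reuse the existing cluster config
    ceph-deploy gatherkeys mon1              # pull the keyrings back from a monitor
    ceph-deploy admin new-admin-node         # install ceph.conf + admin keyring into /etc/ceph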
We were building a test cluster here, and I enabled MDS in order to use
ceph-fuse to fill the cluster with data. It seems the metadata server is
having problems, so I figured I'd just remove it and rebuild it. However, the
ceph-deploy mds destroy command is not implemented; it appears that
Is there a tool to show the layout of PGs in a cluster (which OSDs/nodes the
replicated copies of each PG are placed on)? Something like a table with the PG
number on one side and columns representing nodes/OSDs, with the OSDs containing
a copy of a given PG filled in/marked?
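A rough CLI approximation of that view (not a table, but it does show which
OSDs hold each PG; the PG id below is only an example):

    ceph pg dump          # per-PG listing, including the up/acting OSD sets
    ceph pg map 0.1f      # up/acting OSDs for one PG (0.1f is just an example id)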
I was trying to install the development version of Ceph (0.84) on a cluster,
using ceph-deploy and trying not to have to copy in repo files and other hacks
onto the mon/OSD nodes. The problem is that ceph-deploy seems to presume it knows
the right URL to install from, and it isn't taking the settings from
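For reference, ceph-deploy can apparently be told explicitly where to pull
packages from instead of guessing; a rough sketch, assuming a reasonably recent
ceph-deploy (the hostnames and URLs below are placeholders):

    # install the testing/development packages on the listed hosts
    ceph-deploy install --testing mon1 osd1 osd2

    # or point it at an explicit repository and signing key
    ceph-deploy install --repo-url 'https://ceph.com/rpm-testing' \
        --gpg-url 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' \
        mon1 osd1 osd2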
Having heard some suggestions on RAID configuration under Gluster (we have
someone else doing that evaluation; I'm doing the Ceph piece), I'm wondering
what (if any) RAID configurations would be recommended for Ceph. I have the
impression that striping data could counteract/undermine data
Just out of curiosity, is there a way to mount a Ceph filesystem directly on a
MSWindows system (2008 R2 server)? Just wanted to try something out from a VM.
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Tuesday, August 26, 2014 4:28 PM
To: LaBarre, James (CTR) A6IT; ceph-users
Subject: Re: [ceph-users] Ceph-fuse fails to mount
[Re-added the list.]
I believe you'll find everything you need at
http://ceph.com/docs/master/cephfs/createfs/
-Greg
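For reference, the steps behind that link amount to roughly the following (pool
names and PG counts are only illustrative, and the "ceph fs new" command is only
present in the newer development releases):

    ceph osd pool create cephfs_data 64        # 64 PGs is just an example
    ceph osd pool create cephfs_metadata 64
    ceph fs new cephfs cephfs_metadata cephfs_data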
Never mind, I found it (ceph osd lspools). And since it was just one set of
data/metadata, those were the values.
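(On a cluster that still has the stock pools, that looks roughly like the
output below; pool ids and names will differ on other setups.)

    $ ceph osd lspools
    0 data,1 metadata,2 rbd,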
-Original Message-
From: LaBarre, James (CTR) A6IT
Sent: Wednesday, August 27, 2014 10:04 AM
To: 'Gregory Farnum'; ceph-users
Subject: RE: [ceph-users] Ceph-fuse fails
I have built a couple of ceph test clusters, and am attempting to mount the
storage through ceph-fuse on a RHEL 6.4 VM (the clusters are also in VMs). The
first one I built under v0.80, using directories for the ceph OSDs (as per the
Storage Cluster Quick Start at
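For context, a ceph-fuse mount of that sort is normally along these lines (the
monitor hostname and mount point are placeholders, and it assumes ceph.conf plus
the client keyring are already in /etc/ceph on the client):

    mkdir -p /mnt/cephfs
    ceph-fuse -m mon1:6789 /mnt/cephfs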
I understand the concept with Ceph being able to recover from the failure of an
OSD (presumably with a single OSD being on a single disk), but I'm wondering
what the scenario is if an OSD server node containing multiple disks should
fail. Presuming you have a server containing 8-10 disks,
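The relevant knob here is the failure domain in the CRUSH rule: with "type host"
in the chooseleaf step, each replica lands on a different server, so a whole
8-10 disk node holds at most one copy of any PG. A sketch of how to inspect
that (the rule shown is just the stock default and only illustrative):

    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt

    # excerpt of the decompiled map
    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type host
            step emit
    }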
Is there a repo for this version which works over HTTPS? Because of the
corporate firewall, I can’t install through regular HTTP.
...@inktank.com]
Sent: Tuesday, August 19, 2014 1:00 PM
To: LaBarre, James (CTR) A6IT
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] v0.84 released
Both: https://ceph.com/debian-testing/ and https://ceph.com/rpm-testing/ seem
to work for me. Are you seeing some error?
On Tue, Aug 19, 2014 at 11:57 AM
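For the HTTPS question above, a yum repo file pointing at that URL would look
roughly like this (the el6 sub-path and the gpgkey URL are assumptions; adjust
them for the actual distro layout):

    # /etc/yum.repos.d/ceph-testing.repo
    [ceph-testing]
    name=Ceph testing packages
    baseurl=https://ceph.com/rpm-testing/el6/$basearch
    enabled=1
    gpgcheck=1
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc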