I'm just finding my way around the Ceph documentation. What I'm hoping
to build are servers with 24 data disks and one O/S disk. From what I've
read, the recommended configuration is to run 24 separate OSDs (or 23 if
I have a separate journal disk/SSD), and not have any sort of in-server
RAID.
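If I understand correctly, that would mean one ceph-deploy invocation per
data disk, something along these lines (host and device names below are
only placeholders):
ceph-deploy osd create node1:sdb:/dev/sdy1   # data disk sdb, journal partition on an SSD
ceph-deploy osd create node1:sdc:/dev/sdy2   # and so on for each remaining data disk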
On 24/06/2013 18:41, John Nielsen wrote:
The official documentation is maybe not 100% idiot-proof, but it is
step-by-step:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
If you lose a disk you want to remove the OSD associated with it. This will
trigger a data migration so you a
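A sketch of the usual removal sequence (osd.12 is only an example id):
ceph osd out 12
# wait for rebalancing to finish, stop the ceph-osd daemon, then:
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm 12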
On 24/06/2013 20:27, Dave Spano wrote:
If you remove the OSD after it fails from the cluster and the
crushmap, the cluster will automatically re-assign that number to the
new OSD when you run ceph osd create with no arguments.
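In other words, once the old id has been freed, something like this picks
it up again for the replacement disk:
ceph osd create    # with no arguments, returns the lowest free OSD number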
OK - although obviously if you're going to make a disk with a label
I am looking at evaluating ceph for use with large storage nodes (24-36
SATA disks per node, 3 or 4TB per disk, HBAs, 10G ethernet).
What would be the best practice for deploying this? I can see two main
options.
(1) Run 24-36 osds per node. Configure ceph to replicate data to one or
more ot
On 05/08/2013 17:15, Mike Dawson wrote:
Short answer: Ceph generally is used with multiple OSDs per node. One
OSD per storage drive with no RAID is the most common setup. At 24 or
36 drives per chassis, there are several potential bottlenecks to
consider.
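As a very rough illustration: 24 SATA drives at ~100 MB/s each is about
2.4 GB/s of aggregate disk bandwidth, while a single 10G link tops out
around 1.25 GB/s, so the network (and possibly the HBA and CPU) can
saturate well before the disks do.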
Mark Nelson, the Ceph performance
I'm having trouble understanding the description of Placement Group IDs
at http://ceph.com/docs/master/architecture/
There it says:
...
2. CRUSH takes the object ID and hashes it.
3. CRUSH calculates the hash modulo the number of OSDs. (e.g., 0x58) to
get a PG ID.
...
That seems to imply tha
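One way to see the actual mapping for a concrete object (pool and object
name here are just examples) is:
ceph osd map rbd myobject
# prints the PG the object hashes into and the OSDs that PG maps to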
Trying out the "quick" installation instructions, using four Ubuntu
Server 12.04 VMs, ceph-deploy aborts with the following error:
brian@ceph-admin:~/my-cluster$ ceph-deploy install node1 node2 node3
[ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy install
node1 node2 node3
[ceph
On 03/04/2014 15:42, Georgios Dimitrakakis wrote:
Hi Brian,
try disabling "requiretty" in visudo on all nodes.
There is no "requiretty" in the sudoers file, or indeed any file under /etc.
The manpage says that "requiretty" is off by default, but I suppose
Ubuntu could have broken that. So ju
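For reference, this is roughly how I looked for it, plus the override one
would add via visudo if it did turn up somewhere:
sudo grep -R requiretty /etc/sudoers /etc/sudoers.d/
# if set, a per-user override would look like:  Defaults:ceph !requiretty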
On 03/04/2014 15:51, Brian Candler wrote:
On 03/04/2014 15:42, Georgios Dimitrakakis wrote:
Hi Brian,
try disabling "requiretty" in visudo on all nodes.
There is no "requiretty" in the sudoers file, or indeed any file under
/etc.
The manpage says that "requiretty"
A few more minor nits.
(1) at the "ceph-deploy admin ..." step:
...
[ceph-admin][DEBUG ] connected to host: ceph-admin
[ceph-admin][DEBUG ] detect platform information from remote host
[ceph-admin][DEBUG ] detect machine type
[ceph-admin][DEBUG ] get remote short hostname
[ceph-admin][DEBUG ] wr
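(For context, the quick-start sequence around this step is roughly the
following, using the same hostnames as above:
ceph-deploy admin ceph-admin node1 node2 node3
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
)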
On 04/04/2014 08:14, Georgios Dimitrakakis wrote:
On 03/04/2014 15:51, Brian Candler wrote:
On 03/04/2014 15:42, Georgios Dimitrakakis wrote:
Hi Brian,
try disabling "requiretty" in visudo on all nodes.
There is no "requiretty" in the sudoers file, or indeed any fil
On 03/04/2014 23:43, Brian Beverage wrote:
Here is some info on what I am trying to accomplish. My goal here is to
find the least expensive way to get into virtualization and storage
without the cost of a SAN and proprietary software
...
I have been
tasked with taking a new start up project and
On 04/04/2014 08:34, Brian Candler wrote:
It does also work with:
ssh -o RequestTTY=yes node1 'sudo ls'
ssh -o RequestTTY=force node1 'sudo ls'
But strangely, not if I put this in ~/.ssh/config:
$ cat ~/.ssh/config
Host node1
    RequestTTY force
Host *
    RequestTTY force
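(For comparison, the command-line equivalents of that option are the
-t/-tt flags:
ssh -t node1 'sudo ls'     # request a tty
ssh -tt node1 'sudo ls'    # force tty allocation even without a local tty
)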
On 04/04/2014 12:59, Alfredo Deza wrote:
We test heavily against 12.04 and I use it almost daily for
testing/working with ceph-deploy as well
and have not seen this problem at all.
I have made sure that I have the same SSH version as you:
$ cat /etc/issue
Ubuntu 12.04 LTS \n \l
$ ssh -V
OpenSS
On 04/04/2014 13:39, Brian Candler wrote:
If you create /etc/sudo.conf (not /etc/sudoers!) containing
Path askpass /usr/X11R6/bin/ssh-askpass
Correction:
Path askpass /usr/bin/ssh-askpass
then you don't need the SUDO_ASKPASS incantation.
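(The incantation referred to is along the lines of:
SUDO_ASKPASS=/usr/bin/ssh-askpass sudo -A true
i.e. sudo -A picks up the askpass helper from the environment.)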
On 04/04/2014 14:11, Alfredo Deza wrote:
Have you set passwordless sudo on the remote host?
No. Ah... I missed this bit:
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
The reason being that I misread the preceding instruction:
" For
On 04/04/2014 14:31, Diedrich Ehlerding wrote:
Alfredo Deza writes:
Have you ensured that either there is no firewall up or that the ports
that the monitors need to communicate with each other are open?
Yes, I am sure - the nodes are connected over one single switch, and
no firewall is
On 04/04/2014 15:14, Diedrich Ehlerding wrote:
Hi Brian,
thank you for your response, however:
Including iptables? CentOS/RedHat default to iptables enabled and
closed.
"iptables -Lvn" to be 100% sure.
hvrrzceph1:~ # iptables -Lvn
iptables: No chain/target/match by that name.
hvrrzceph1:~ #
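That error is most likely just option parsing: "-Lvn" makes iptables treat
"vn" as a chain name. Listing with the flags separated should work:
iptables -L -v -n
# or equivalently: iptables -nvL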