Cephistas,
I have one other (admittedly minor) issue. The number of Monitors listed
on the dashboard (in the "Mon" section) says "2/2 Quorum", but in the
"Hosts" section it correctly says "3 Reporting 3 Mon/3 OSD".
Any idea how I can get the dashboard to display the correct number of
monitors?
On Tue, 30 Dec 2014 11:25:40 PM Erik Logtenberg wrote:
> If you want to be able to start your osd's with /etc/init.d/ceph init
> script, then you better make sure that /etc/ceph/ceph.conf does link
> the osd's to the actual hostname
I tried again and it was ok for a short while, then *something* moved them
back.
Is there a command to do this without decompiling/editing/recompiling the
crush map? Doing it by hand makes me nervous ...
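For reference, the crush location of an OSD or a whole bucket can be changed
on the live map without decompiling it; a rough sketch, with made-up ids and
names:
ceph osd crush create-or-move osd.2 0.91 root=default host=node2
ceph osd crush move node2 root=default
Simple moves like that don't need the decompile/edit/recompile cycle.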
--
Lindsay
If you want to be able to start your osd's with /etc/init.d/ceph init
script, then you better make sure that /etc/ceph/ceph.conf does link
the osd's to the actual hostname :)
Check out this snippet from my ceph.conf:
[osd.0]
host = ceph-01
osd crush location = "host=ceph-01-ssd root=ssd"
[osd.1]
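A sketch of how such a ceph.conf might continue (the [osd.1] values below are
assumptions, not Erik's actual entry), plus the knob that matters if you place
OSDs in the crush map by hand:
[osd.1]
host = ceph-01
# hypothetical: the plain HDD OSD stays under the normal host bucket
osd crush location = "host=ceph-01 root=default"
[osd]
# stops the init script from moving OSDs back to their configured
# location every time they start:
osd crush update on start = false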
On Tue, 30 Dec 2014 10:38:14 PM Erik Logtenberg wrote:
> No, bucket names in crush map are completely arbitrary. In fact, crush
> doesn't really know what a "host" is. It is just a bucket, like "rack"
> or "datacenter". But they could be called "cat" and "mouse" just as well.
Hmmm, I tried that earlier ...
No, bucket names in crush map are completely arbitrary. In fact, crush
doesn't really know what a "host" is. It is just a bucket, like "rack"
or "datacenter". But they could be called "cat" and "mouse" just as well.
The only reason to use host names is for human readability.
You can then use crush rules to pick whichever bucket you want for each pool.
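To illustrate, buckets with arbitrary names can be created straight from the
CLI (the names below are invented):
ceph osd crush add-bucket ssd root
ceph osd crush add-bucket ceph-01-ssd host
ceph osd crush move ceph-01-ssd root=ssd
# a simple replicated rule that only selects OSDs under that root:
ceph osd crush rule create-simple ssdrule ssd host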
Hey Lindsay,
Lindsay Mathieson [Wed, Dec 31, 2014 at 06:23:10AM +1000]:
> On Tue, 30 Dec 2014 05:07:31 PM Nico Schottelius wrote:
> > While writing this I noted that the relation / factor is exactly 5.5 times
> > wrong, so I *guess* that ceph treats all hosts with the same weight (even
> > though it looks differently to me in the osd tree and the crushmap)?
On Tue, 30 Dec 2014 04:18:07 PM Erik Logtenberg wrote:
> As you can see, I have four hosts: ceph-01 ... ceph-04, but eight host
> entries. This works great.
you have
- host ceph-01
- host ceph-01-ssd
Don't the host names have to match the real host names?
--
Lindsay
On Tue, Dec 30, 2014 at 12:38 PM, Erik Logtenberg wrote:
>>
>> Hi Erik,
>>
>> I have tiering working on a couple test clusters. It seems to be
>> working with Ceph v0.90 when I set:
>>
>> ceph osd pool set POOL hit_set_type bloom
>> ceph osd pool set POOL hit_set_count 1
>> ceph osd pool set POOL hit_set_period 3600
On Tue, 30 Dec 2014 05:07:31 PM Nico Schottelius wrote:
> While writing this I noted that the relation / factor is exactly 5.5 times
> wrong, so I *guess* that ceph treats all hosts with the same weight (even
> though it looks differently to me in the osd tree and the crushmap)?
I believe If you h
>
> Hi Erik,
>
> I have tiering working on a couple test clusters. It seems to be
> working with Ceph v0.90 when I set:
>
> ceph osd pool set POOL hit_set_type bloom
> ceph osd pool set POOL hit_set_count 1
> ceph osd pool set POOL hit_set_period 3600
> ceph osd pool set POOL cache_target_d
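As a sanity check (POOL is a placeholder), the values can be read back
afterwards:
ceph osd pool get POOL hit_set_type
ceph osd pool get POOL hit_set_count
ceph osd pool get POOL hit_set_period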
It took me a minute to realize the original for those lines was given
last. LOL
Thanks! Those changes and a restart of Apache worked perfectly.
I'd like to know how those values get populated, and why it changed from
"total_used" to "total_used_bytes", etc.
On Tue, Dec 30, 2014 at 10:39 AM, Mi
On Tue, Dec 30, 2014 at 7:56 AM, Erik Logtenberg wrote:
>
> Hi,
>
> I use a cache tier on SSD's in front of the data pool on HDD's.
>
> I don't understand the logic behind the flushing of the cache however.
> If I start writing data to the pool, it all ends up in the cache pool at
> first. So far so good, this was what I expected.
Hi Brian,
I had this problem when I upgraded to firefly (or possibly giant). At any
rate, the data values changed at some point and calamari needs a slight update.
Check this file:
/opt/calamari/venv/lib/python2.6/site-packages/calamari_rest_api-0.1-py2.6.egg/calamari_rest/views/v1.py
diff v1.py ...
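The gist, sketched from memory rather than the exact diff: giant's "ceph df"
JSON renamed the totals, so v1.py has to read the new keys. Compare with what
your cluster reports:
ceph df --format json-pretty | grep total
# giant-era output has total_bytes, total_used_bytes and total_avail_bytes,
# where older releases had total_space, total_used and total_avail (in kB);
# v1.py needs to be pointed at whichever keys your version emits.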
Cephistas,
I've been running a Ceph cluster for several months now. I started out
with a VM called "master" as the admin node and a monitor, plus two Dell
servers as OSD nodes (called Node1 and Node2), which I also made monitors
so I had 3 monitors.
After I got that all running fine, I added Calamari.
Good evening,
for some time we have had the problem that ceph stores too much data on
a host with small disks. Originally we used weight 1 = 1 TB, but
we reduced the weight for this particular host further to keep it
somehow alive.
Our setup currently consists of 3 hosts:
wein: 6x 136G (fest dis
Hi Lindsay,
Actually you just set up two entries for each host in your crush map. One
for hdd's and one for ssd's. My osd's look like this:
# id    weight  type name       up/down reweight
-6      1.8     root ssd
-7      0.45            host ceph-01-ssd
0       0.45                    osd.0   up
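The matching rule in the decompiled crush map would look roughly like this
(ruleset number and min/max sizes are assumptions):
rule ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
A pool is then pointed at it with "ceph osd pool set <pool> crush_ruleset 1".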
Hi,
I use a cache tier on SSD's in front of the data pool on HDD's.
I don't understand the logic behind the flushing of the cache however.
If I start writing data to the pool, it all ends up in the cache pool at
first. So far so good, this was what I expected. However ceph never
starts actually flushing.
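As far as I know, flushing only kicks in once the cache pool has targets to
measure against; a sketch with placeholder values (POOL is the cache pool):
ceph osd pool set POOL target_max_bytes 100000000000
ceph osd pool set POOL cache_target_dirty_ratio 0.4
ceph osd pool set POOL cache_target_full_ratio 0.8
ceph osd pool set POOL cache_min_flush_age 600
Without target_max_bytes (or target_max_objects) the agent has nothing to
compare the dirty ratio against and never starts flushing.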
I looked at the section for setting up different pools with different OSD's
(e.g SSD Pool):
http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
And it seems to make the assumption that the ssd's and platters all live on
separate hosts.
Not the case for my setup ...
Hi.
I get the same message on Debian Jessie, although CephFS mounts
and works fine.
Jiri.
On 18/12/2014 01:00, John Spray wrote:
Hmm, from a quick google it appears you are not the only one who has
seen this symptom with mount.ceph. Our mtab code appears to have
diverged a bit from th
Hi Steven,
On 30/12/14 13:26, Steven Sim wrote:
You mentioned that machines see a QEMU IDE/SCSI disk; they don't know
whether it's on ceph, NFS, local storage, LVM, ... so it works OK for any VM
guest OS.
But what if I want the Ceph cluster to serve a whole range of clients
in the data center, ranging from Solaris and AIX to Windows and ESX?
I'm working on something very similar at the moment to present RBD's to ESXi
Hosts.
I'm going to run 2 or 3 VM's on the local ESXi storage to act as iSCSI
"proxy" nodes.
They will run a pacemaker HA setup with the RBD and LIO iSCSI resource
agents to provide a failover iSCSI target which maps back to the RBD.
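For reference, the manual (non-pacemaker) version of one such export looks
roughly like this; pool, image and IQN names are invented:
rbd map rbd/esx-lun0
targetcli /backstores/block create name=esx-lun0 dev=/dev/rbd/rbd/esx-lun0
targetcli /iscsi create iqn.2014-12.com.example:esx-lun0
targetcli /iscsi/iqn.2014-12.com.example:esx-lun0/tpg1/luns create /backstores/block/esx-lun0
The resource agents essentially automate the map/export steps and move the
target IP between the proxy VMs on failover.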
Yes, that's still relevant. If you are going to mix different capacities of
disks it might be worth either adjusting the weights all to the same value and
just accepting that you will never use the full 3TB, or partitioning the 3TB
into 2 partitions and using the first 1TB as the main storage pool mixed with ...
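A sketch of the "same weight" approach (osd id and weight are made up; by
convention the weight is the disk size in TB):
ceph osd crush reweight osd.3 1.0
ceph osd tree   # verify the new weight took effect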
Hi Eneko,
nope, new pool has all pgs active+clean, no errors during image
creation. The format command just hangs, without error.
On 30.12.2014 12:33, Eneko Lacunza wrote:
> Hi Christian,
>
> New pool's pgs also show as incomplete?
>
> Did you notice anything remarkable in the ceph logs during the new pool's
> image format?
Hi Christian,
New pool's pgs also show as incomplete?
Did you notice anything remarkable in the ceph logs during the new pool's
image format?
On 30/12/14 12:31, Christian Eichelmann wrote:
Hi Eneko,
I was trying an rbd cp before, but that was hanging as well. But I
couldn't find out if the source image was causing the hang or the
destination image.
Hi Eneko,
I was trying an rbd cp before, but that was hanging as well. But I
couldn't find out if the source image was causing the hang or the
destination image. That's why I decided to try a posix copy.
Our cluster is still nearly empty (12TB / 867TB). But as far as I
understood (if not, somebody please correct me) ...
Hi Christian,
Have you tried to migrate the disk from the old storage (pool) to the
new one?
I think it should show the same problem, but I think it'd be a much
easier path to recover than the posix copy.
How full is your storage?
Maybe you can customize the crushmap, so that some OSDs are
Hi Nico and all others who answered,
After some more attempts to somehow get the pgs into a working state (I've
tried force_create_pg, which was putting them in creating state. But
that was obviously not true, since after rebooting one of the containing
osd's it went back to incomplete), I decided to
On Tue, 30 Dec 2014 11:26:08 AM Eneko Lacunza wrote:
> have a small setup with such a node (only 4 GB RAM, another 2 good
> nodes for OSD and virtualization) - it works like a charm and CPU max is
> always under 5% in the graphs. It only peaks when backups are dumped to
> its 1TB disk using NFS
Hi,
On 30/12/14 11:55, Lindsay Mathieson wrote:
On Tue, 30 Dec 2014 11:26:08 AM Eneko Lacunza wrote:
have a small setup with such a node (only 4 GB RAM, another 2 good
nodes for OSD and virtualization) - it works like a charm and CPU max is
always under 5% in the graphs. It only peaks when backups are dumped to
its 1TB disk using NFS.
Hi Steven,
Welcome to the list.
On 30/12/14 11:47, Steven Sim wrote:
This is my first posting and I apologize if the content or query is
not appropriate.
My understanding of Ceph is that the block and NAS services are provided
through specialized (albeit open source) kernel modules for Linux.
What about other OSes, e.g. Solaris, AIX, Windows, ESX ...
Hi;
This is my first posting and I apologize if the content or query is not
appropriate.
My understanding of Ceph is that the block and NAS services are provided
through specialized (albeit open source) kernel modules for Linux.
What about other OSes, e.g. Solaris, AIX, Windows, ESX ...
If the solution is t
On Tue, 30 Dec 2014 03:11:25 PM debian Only wrote:
> ceph 0.87, Debian 7.5, can anyone help?
>
> 2014-12-29 20:03 GMT+07:00 debian Only :
> I want to move the mds from one host to another.
>
> How do I do it?
>
> This is what I did, but ceph health is not ok and the mds was not removed:
>
> root@ceph06-vm:~# ceph mds rm 0 mds.ceph06-vm
Hi,
On 29/12/14 15:12, Christian Balzer wrote:
3rd Node
- Monitor only, for quorum
- Intel Nuc
- 8GB RAM
- CPU: Celeron N2820
Uh oh, a bit weak for a monitor. Where does the OS live (on this and the
other nodes)? The leveldb (/var/lib/ceph/..) of the monitors likes it fast,
SSDs preferably.
On 12/30/2014 09:40 AM, Chen, Xiaoxi wrote:
> Hi,
> First of all, the data is safe since it is persisted in the journal; if an error
> occurs on the OSD data partition, replaying the journal will get the data back.
Agreed, the data is safe in the journal. But when the journal is flushed the
data is moved to the filestore and ...
Hi,
First of all, the data is safe since it is persisted in the journal; if an error
occurs on the OSD data partition, replaying the journal will get the data back.
And there is a wbthrottle; you can configure how much data (ios, bytes,
inodes) you want to remain in memory. A background thread will ...
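The wbthrottle knobs being referred to are the filestore_wbthrottle_* options
(xfs variants shown; the numbers below are placeholders, not recommendations):
[osd]
filestore wbthrottle xfs bytes start flusher = 41943040
filestore wbthrottle xfs bytes hard limit = 419430400
filestore wbthrottle xfs ios start flusher = 500
filestore wbthrottle xfs ios hard limit = 5000
filestore wbthrottle xfs inodes start flusher = 500
filestore wbthrottle xfs inodes hard limit = 5000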
Hi,
On our Ceph cluster we get some inconsistent PGs from time to time
(after deep-scrub). We have some issues with disks/sata cables/the lsi
controller causing IO errors now and then (but that's not the point
in this case).
When an IO error occurs on the OSD journal partition everything works as ...
ceph 0.87, Debian 7.5, can anyone help?
2014-12-29 20:03 GMT+07:00 debian Only :
> I want to move the mds from one host to another.
>
> How do I do it?
>
> This is what I did, but ceph health is not ok and the mds was not removed:
>
> root@ceph06-vm:~# ceph mds rm 0 mds.ceph06-vm
> mds gid 0 dne
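A rough sketch of the usual sequence for that era of ceph (ceph07-vm is a
made-up target host; note that "ceph mds rm" wants the numeric gid from
"ceph mds dump", not 0):
# stop the mds daemon on the old host
/etc/init.d/ceph stop mds
# check the current mds map and gids
ceph mds dump
# bring an mds up on the new host, e.g. via ceph-deploy
ceph-deploy mds create ceph07-vm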