Unfortunately, 3 hours ago I made the decision to re-init the cluster :(
Some data was available via rados, but the cluster was unstable, and
migration of the data was difficult under time pressure from outside :)
After initializing a new cluster on one machine, with clean pools, I was
able to increase the number of PGs in
On Tue, 21 Feb 2012, Sławomir Skowron wrote:
> If there is no chance to stabilize this cluster, I will try something like
> this:
>
> - stop one machine in the cluster
> - check that it is still OK and the data is available
> - make a new fs on one machine
> - migrate the data by rados via obsync
> - expand the new cluster by the second and third machine
If there is no chance to stabilize this cluster, I will try something like this:
- stop one machine in the cluster
- check that it is still OK and the data is available
- make a new fs on one machine
- migrate the data by rados via obsync
- expand the new cluster by the second and third machine
- change keys for radosgw
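The obsync migration step above could be sketched roughly as follows. This is only a sketch: the gateway hostnames, bucket names, and keys are placeholders (not values from this thread), and the SRC_*/DST_* credential variables follow obsync's documented convention. With DRY_RUN=1 the script only prints the commands it would run.

```shell
#!/bin/sh
# Sketch of the "migrate data by rados via obsync" step. Endpoints,
# bucket names, and keys below are placeholders, not values from the
# thread. With DRY_RUN=1 the script only prints what it would run.
DRY_RUN=1
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

# obsync reads source/destination credentials from these variables.
export SRC_AKEY=old-access-key SRC_SKEY=old-secret-key
export DST_AKEY=new-access-key DST_SKEY=new-secret-key

# Copy each radosgw bucket from the old gateway to the new one.
for bucket in bucket1 bucket2; do
    run obsync "s3://old-rgw.example.com/$bucket" \
               "s3://new-rgw.example.com/$bucket"
done
```

Flipping DRY_RUN to 0 would execute the copies for real, one bucket at a time, which matches the plan of migrating before expanding the new cluster.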
There are 40 GB in 3 copies in the rgw bucket, plus some data in RBD, but
that can be destroyed.
ceph -s reports 224 GB in the normal state.
Regards,
iSS
On 20 Feb 2012, at 21:19, Sage Weil wrote:
> Ooh, the pg split functionality is currently broken, and we weren't
> planning on fixing it for a while longer.
Ooh, the pg split functionality is currently broken, and we weren't
planning on fixing it for a while longer. I didn't realize it was still
possible to trigger it from the monitor.
I'm looking at how difficult it is to make it work (even inefficiently).
How much data do you have in the cluster?
and this in ceph -w:
2012-02-20 20:34:13.531857 log 2012-02-20 20:34:07.611270 osd.76
10.177.64.8:6872/5395 49 : [ERR] mkpg 7.e up [76,11] != acting [76]
2012-02-20 20:34:13.531857 log 2012-02-20 20:34:07.611308 osd.76
10.177.64.8:6872/5395 50 : [ERR] mkpg 7.16 up [76,11] != acting [76]
2012-02
After increasing pg_num from 8 to 100 in .rgw.buckets, I have some
serious problems.
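For context, the resize described above is a single monitor-side command. A sketch with the numbers from the message (printed only, not executed, since on a cluster of this vintage it would hit the broken pg-split path Sage describes):

```shell
# The pg_num increase that exposed the broken pg-split path.
# Pool name and value are taken from the message above; the command
# is only echoed here, never run against a cluster.
pool=".rgw.buckets"
new_pg_num=100
echo "ceph osd pool set $pool pg_num $new_pg_num"
```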
pool name      category  KB    objects  clones  degraded  unfound  rd  rd KB  wr  wr KB
.intent-log    -         4662  19