Some Additions: meanwhile we are at the following state:
2012-02-22 10:38:49.587403 pg v1044553: 2046 pgs: 2036 active+clean,
10 active+clean+inconsistent; 2110 GB data, 4061 GB used, 25732 GB /
29794 GB avail
The active+recovering+remapped+backfill state disappeared after a restart of a
crashed OSD.
The OSD
On Feb 22, 2012, at 1:53 AM, Jens Rehpöhler jens.rehpoeh...@filoo.de wrote:
Some Additions: meanwhile we are at the following state:
2012-02-22 10:38:49.587403 pg v1044553: 2046 pgs: 2036 active+clean,
10 active+clean+inconsistent; 2110 GB data, 4061 GB used, 25732 GB /
29794 GB avail
The
On Wed, Feb 22, 2012 at 1:39 AM, madhusudhana
madhusudhana.u.acha...@gmail.com wrote:
Hi
I have finally configured a Ceph cluster with 8 nodes. I have 2 MDS
servers and 3 monitors, and the remaining 3 nodes are OSDs. Each system has
2T SATA drives. I have 3 partitions created, one for root file
I didn't see him say so, but Sage pulled this in last week...thanks! :)
-Greg
On Wed, Feb 15, 2012 at 8:29 AM, Holger Macht hma...@suse.de wrote:
OSDs (src/osd/ClassHandler.cc) specifically look for libcls_*.so in
/usr/$libdir/rados-classes, so libcls_rbd.so and libcls_rgw.so need to
be
Wido,
Sorry we lost track of this last week — we were all distracted by FAST 12! :)
So it looks like they're both on the same map and osd.4 is sending
pings to osd.19, but osd.19 is just ignoring them? Or do you really
have debug_os on and not debug_osd? :)
-Greg
On Wed, Feb 15, 2012 at 6:12 AM,
On Tue, Feb 21, 2012 at 00:05, Paul Pettigrew
paul.pettig...@mach.com.au wrote:
Looking forward to upgrading our Ceph cluster from v0.41 to v0.42, started to
compile the .deb packages per the procedure below (see end of email).
[grab a clean source]
dpkg-buildpackage -j16
...
However, the
Hi Gregory,
On 22.02.2012 18:12, Gregory Farnum wrote:
On Feb 22, 2012, at 1:53 AM, Jens Rehpöhler jens.rehpoeh...@filoo.de
wrote:
Some Additions: meanwhile we are at the following state:
2012-02-22 10:38:49.587403 pg v1044553: 2046 pgs: 2036 active+clean,
10 active+clean+inconsistent; 2110 GB
Complete configuration initialization for special actions, and
hold Resetter lock while running reset.
Signed-off-by: Alexandre Oliva ol...@lsd.ic.unicamp.br
---
 src/ceph_mds.cc     |    1 +
 src/mds/Resetter.cc |    2 ++
 2 files changed, 3 insertions(+), 0 deletions(-)
diff --git
This was supposed to fix bug 1946, and likely bug 1849 too, but it looks
like something's still missing for a complete fix. fuse-unmounting
between touching a dir and creating a snapshot seems to help get correct
snapshot timestamp, but touching the dir after remounting and restarting
the mds
Hey-
Can you look at the wip-mds-resetter branch? I think that is closer to
the correct locking. We need to hold the mutex for ms_dispatch (the calls
into objecter in particular), and to drop it when we block...
Thanks!
sage
On Tue, 21 Feb 2012, Alexandre Oliva wrote:
Complete
Applied, with a small cleanup (wip-mds-old-inodes).. I'll merge it in
tomorrow after a bit of testing.
On Tue, 21 Feb 2012, Alexandre Oliva wrote:
This was supposed to fix bug 1946, and likely bug 1849 too, but it looks
like something's still missing for a complete fix. fuse-unmounting
Tommi Virtanen tommi.virtanen at dreamhost.com writes:
On Wed, Feb 22, 2012 at 01:39, madhusudhana
madhusudhana.u.acharya at gmail.com wrote:
Hi
I have finally configured a Ceph cluster with 8 nodes. I have 2 MDS
servers and 3 monitors, and the remaining 3 nodes are OSDs. Each system has
2T