On Mon, 13 Feb 2012, Mandell Degerness wrote:
> What would you recommend as the best recovery strategy when the disk
> holding the monitor data for a given monitor fails?
>
> Should I recover from a periodic backup and restart the monitor, or
> should I treat it as a new monitor being added to the cluster?
What would you recommend as the best recovery strategy when the disk
holding the monitor data for a given monitor fails?
Should I recover from a periodic backup and restart the monitor, or
should I treat it as a new monitor being added to the cluster? I
suspect the former, but I'd like confirmation.
G'day all
About to commence an R&D eval of the Ceph platform, having been impressed with
the momentum achieved over the past 12 months.
I have one question regarding design before rolling out to metal.
I will be using 1x SSD drive per storage server node (assume it is /dev/sdb for
this discussion).
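A common layout for this kind of setup is to put each OSD's journal on the shared SSD. A minimal sketch of what that could look like in ceph.conf, assuming two OSDs per node and pre-made partitions /dev/sdb1 and /dev/sdb2 (all paths, hostnames, and device names here are assumptions, not from the original post):

```ini
; hypothetical sketch: one SSD per node, one journal partition per OSD
[osd]
    osd data = /data/osd.$id
    osd journal size = 1000      ; MB; used when the journal is a plain file

[osd.0]
    host = node1
    osd journal = /dev/sdb1      ; journal on a partition of the node's SSD

[osd.1]
    host = node1
    osd journal = /dev/sdb2
```

The journal can point at either a file or a raw partition; a raw partition on the SSD avoids filesystem overhead for the journal writes.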
2012/2/13 Székelyi Szabolcs :
> I'm using Ceph 0.41 with the FUSE client. After a while I get stale NFS file
> errors when trying to read a file or list a directory. Logs and scrubbing
> don't show any errors or suspicious entries. After remounting the filesystem
> either by restarting the cluster
On 2012. February 13. 15:54:39 Sage Weil wrote:
> On Tue, 14 Feb 2012, Székelyi Szabolcs wrote:
> > No, there's no NFS in the picture. The OSDs' backend storage is on a
> > local filesystem. I think it's the FUSE client telling me this.
>
> Okay, that sounds like a bug then. The two interesting t
Howdy,
It looks like ceph_fs.h contains the mask flags (e.g. CEPH_SETATTR_MODE) used
in ceph_setattr, but I do not see these flags in any header installed from .deb
files (grep /usr/include/*).
Am I missing a location? Should these flags be part of the installed headers?
Thanks,
-Noah
On Tue, 14 Feb 2012, Székelyi Szabolcs wrote:
> On 2012. February 13. 15:34:13 Sage Weil wrote:
> > On Tue, 14 Feb 2012, Székelyi Szabolcs wrote:
> > > I'm using Ceph 0.41 with the FUSE client. After a while I get stale NFS
> > > file errors when trying to read a file or list a directory. Logs and
On 2012. February 13. 15:34:13 Sage Weil wrote:
> On Tue, 14 Feb 2012, Székelyi Szabolcs wrote:
> > I'm using Ceph 0.41 with the FUSE client. After a while I get stale NFS
> > file errors when trying to read a file or list a directory. Logs and
> > scrubbing don't show any errors or suspicious entries
On Tue, 14 Feb 2012, Székelyi Szabolcs wrote:
> I'm using Ceph 0.41 with the FUSE client. After a while I get stale NFS file
> errors when trying to read a file or list a directory. Logs and scrubbing
> don't show any errors or suspicious entries. After remounting the
> filesystem either by
Hi,
I'm using Ceph 0.41 with the FUSE client. After a while I get stale NFS file
errors when trying to read a file or list a directory. Logs and scrubbing
don't show any errors or suspicious entries. After remounting the filesystem
either by restarting the cluster thus forcing the clients to
On Mon, 13 Feb 2012, Dyweni - Ceph-Devel wrote:
> Hi,
>
> That doesn't make sense... Would you explain further?
>
> This page (http://ceph.newdream.net/wiki/Cluster_configuration) says the id
> can be a number or a name, which applies to (mon|mds|osd).$id.
The monitors and mds's are identified b
Hi,
That doesn't make sense... Would you explain further?
This page (http://ceph.newdream.net/wiki/Cluster_configuration) says the id
can be a number or a name, which applies to (mon|mds|osd).$id.
Will I have problems using SHA1 sums to uniquely identify each OSD in my
cluster? (i.e. OSD.$s
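Sage's reply above is cut off; if the convention of that era is remembered correctly, monitors and MDSs take arbitrary names as ids while OSD ids are small integers, which would rule out SHA1 strings for OSDs. A sketch of the mixed id styles (hosts and the address are made up):

```ini
; sketch: named ids for mon/mds, numeric ids for osd
[mon.a]
    host = node1
    mon addr = 192.168.0.10:6789

[mds.a]
    host = node1

[osd.0]
    host = node2

[osd.1]
    host = node3
```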
On Sun, 12 Feb 2012, Jens Rehpoehler wrote:
> > > Hi list,
> > >
> > > today I've got another problem.
> > >
> > > ceph -w shows up with an inconsistent PG overnight:
> > >
> > > 2012-02-10 08:38:48.701775 pg v441251: 1982 pgs: 1981 active+clean, 1
> > > active+clean+inconsistent; 179
On Sun, 12 Feb 2012, Jens Rehpoehler wrote:
> On 12.02.2012 13:00, Jens Rehpoehler wrote:
> > > > Hi list,
> > > >
> > > > today I've got another problem.
> > > >
> > > > ceph -w shows up with an inconsistent PG overnight:
> > > >
> > > > 2012-02-10 08:38:48.701775 pg v441251: 1982 pg
On Mon, 13 Feb 2012, eric_yh_c...@wistron.com wrote:
> Hi, all:
>
> For scalability reasons, we would like to name the first
> hard disk "00101" on the first server,
>
> and the first hard disk "00201" on the second server. The ceph.conf
> looks like this:
>
> [osd]
> osd data = /srv
On 02/10/2012 05:05 PM, sridhar basam wrote:
> But the server never ACKed that packet. Too busy?
>
> I was collecting vmstat data during the run; here are the important bits:
>
> Fri Feb 10 11:56:51 MST 2012
> vmstat -w 8 16
> procs ---memory-- ---swap-- -i
Hi, all:
For scalability reasons, we would like to name the first
hard disk "00101" on the first server,
and the first hard disk "00201" on the second server. The ceph.conf
looks like this:
[osd]
osd data = /srv/osd.$id
osd journal = /srv/osd.$id.journal
osd journal size = 1000
[osd
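The preview is cut off before the per-OSD sections, but under the naming scheme described they would presumably continue along these lines (host names are made up; also note that Ceph parses OSD ids numerically, so "00101" may well be treated as plain 101 — worth verifying before committing to the scheme):

```ini
; hypothetical per-OSD sections continuing the "00101"/"00201" scheme
[osd.00101]
    host = server1

[osd.00201]
    host = server2
```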