___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Does anyone have an idea how to resolve the situation?
Thanks for any advice.
Kind Regards
Harald Rößler
> On 23.10.2014 at 18:56, Harald Rößler wrote:
>
> @Wido: sorry, I don’t understand 100% what you mean; I generated some output
> which may help.
>
>
> Ok, the pool:
>
> pool 3
> ...
>             "...": [
>                 15,
>                 21,
>                 23]},
>         "empty": 0,
>         "dne": 0,
>         "incomplete": 0,
>         "last_epoch_started": 8576},
>     "recovery_state": [
>         { "name": "Started\/Primary\/Active",
I also had Robert's experience of stuck operations becoming unstuck
overnight.
On Tue, Oct 21, 2014 at 12:02 PM, Harald Rößler <harald.roess...@btd.de> wrote:
After more than 10 hours, the situation is the same; I don’t think it will fix
itself over time. How can I find out what the problem is?
performance. Mine currently take about 20
minutes per PG. If all 47 are on the same OSD, it'll be a while. If they're
evenly split between multiple OSDs, parallelism will speed that up.
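Craig's 20-minutes-per-PG figure lends itself to a quick back-of-the-envelope estimate. The sketch below is hypothetical (the function name and the serial-per-OSD assumption are mine, not from the thread):

```python
import math

# Rough estimate for the backfill discussion above: PGs on the same OSD
# are assumed to backfill serially, distinct OSDs in parallel.
def backfill_hours(stuck_pgs: int, minutes_per_pg: float, parallel_osds: int) -> float:
    pgs_per_osd = math.ceil(stuck_pgs / parallel_osds)
    return pgs_per_osd * minutes_per_pg / 60.0

print(backfill_hours(47, 20, 1))  # all on one OSD: roughly 15.7 hours
print(backfill_hours(47, 20, 4))  # spread over 4 OSDs: about 4 hours
```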
On Tue, Oct 21, 2014 at 1:22 AM, Harald Rößler <harald.roess...@btd.de> wrote:
lower weight on full OSDs, or try changing the osd_near_full_ratio
parameter in your cluster from 85 to, for example, 89. But I don't know what can
go wrong when you do that.
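For reference, the nearfull/full behaviour being discussed can be sketched like this. The thresholds follow the usual Ceph defaults (nearfull 0.85, full 0.95); the helper itself is hypothetical:

```python
# Sketch of the nearfull/full logic: an OSD is flagged "nearfull" once
# utilization crosses the nearfull ratio, and writes are blocked at the
# full ratio. Values are illustrative, not read from a live cluster.
def osd_flags(used_ratio: float, nearfull: float = 0.85, full: float = 0.95) -> str:
    if used_ratio >= full:
        return "full"      # writes blocked
    if used_ratio >= nearfull:
        return "nearfull"  # warning only
    return "ok"

print(osd_flags(0.87))                 # nearfull at the default 0.85
print(osd_flags(0.87, nearfull=0.89))  # ok after raising the ratio
```

Note that raising the nearfull ratio only moves the warning threshold; it does not free any space, and pushing it close to the full ratio leaves little headroom for recovery traffic.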
2014-10-20 17:12 GMT+02:00 Wido den Hollander <w...@42on.com>:
On 10/20/2014 05:10 PM, Harald Rößler wrote:
Yes, tomorrow I will get the replacement for the failed disk; getting a new node
with many disks will take a few days.
Any other ideas?
Harald Rößler
> On 20.10.2014 at 16:45, Wido den Hollander wrote:
>
> On 10/20/2014 04:43 PM, Harald Rößler wrote:
>> Yes, I had some OSD
or more.
Also, one of the VMs doesn’t start because of a slow request warning.
Thanks for your advice.
Harald Rößler
> On 20.10.2014 at 17:12, Wido den Hollander wrote:
>
> On 10/20/2014 05:10 PM, Harald Rößler wrote:
>> yes, tomorrow I will get the replacement of the failed
At the same time I had a
hardware failure of one disk. :-( After that failure the recovery process started
at "degraded ~ 13%" and stopped at 7%.
Honestly, at the moment I am scared that I am doing the wrong operation.
Regards
Harald Rößler
> On 20.10.2014 at 14:51, Wido den Hollander wrote:
degraded
(7.491%)
I have tried to restart all OSDs in the cluster, but that does not help to finish the
recovery of the cluster.
Does anyone have an idea?
Kind Regards
Harald Rößler
is active+remapped+backfilling, acting [3,45,37]
Does someone have an idea how to kick the process so it starts again?
Thanks a lot in advance
Best Regards
Kind regards,
Harald Rößler
Hi,
at the moment I have a little problem when it comes to a remapping
of the Ceph file system. In VMs with large disks (4 TB each), the
operating system freezes. The freeze is always accompanied by the
message "[WRN] 1 slow requests". At the moment, bobtail is installed.
Does any
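If it helps to quantify how often that warning fires, here is a toy parser for lines like the quoted "[WRN] 1 slow requests"; the surrounding log format is assumed, not copied from a real bobtail log:

```python
import re

# Extract the slow-request counts from "[WRN] N slow requests" lines.
# The timestamp/daemon prefix in the sample lines is hypothetical.
def slow_request_counts(lines):
    pattern = re.compile(r"\[WRN\]\s+(\d+)\s+slow requests?")
    return [int(m.group(1)) for line in lines if (m := pattern.search(line))]

log = [
    "2014-10-20 17:12:01 osd.3 [WRN] 1 slow requests",
    "2014-10-20 17:12:31 osd.3 [WRN] 6 slow requests",
    "2014-10-20 17:13:00 osd.3 [INF] recovery continuing",
]
print(slow_request_counts(log))  # [1, 6]
```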
Hi,
Am I correct that placement groups without data have no impact on the
performance of a Ceph cluster? Like in my case, the pools data and rbd.
Thanks for the clarification.
http://ceph.com/docs/master/rados/operations/pools/#create-a-pool
When you create a pool, set the number of placement groups.
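The commonly cited rule of thumb from that page, total PGs ≈ (OSDs × 100) / replicas rounded up to a power of two, can be sketched as follows (the helper is illustrative, not an official calculator):

```python
# Suggest a PG count per the (OSDs * 100) / replicas rule of thumb,
# rounded up to the next power of two.
def suggested_pg_count(num_osds: int, replicas: int) -> int:
    target = num_osds * 100 / replicas
    power = 1
    while power < target:
        power *= 2
    return power

print(suggested_pg_count(9, 3))  # 300 -> 512
```

For example, 9 OSDs with 3 replicas targets 300 PGs, which rounds up to 512.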
On Mon, 2013-05-13 at 18:55 +0200, Gregory Farnum wrote:
> On Mon, May 13, 2013 at 9:10 AM, Harald Rößler wrote:
> >
> > Hi Together
> >
> > is there a description of how a shared image works in detail? Can such
> > an image can be used for a shared file system on
Hi Together
is there a description of how a shared image works in detail? Can such
an image be used for a shared file system mounted on two virtual machines
(KVM)? In my case, one machine writes and the other KVM only reads.
Are the changes visible on the read-only KVM?
Thanks
With Regards