The main problem is not the pg_num, but some other issue with your
network or Ceph services, AFAIK.
Can you paste the output of ceph -s, ceph osd tree, and cat ceph.conf?
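Something like the following, run on a monitor node, should capture what we
need (assuming the default config path; adjust if yours differs):
# ceph -s
# ceph osd tree
# cat /etc/ceph/ceph.conf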
2016-05-23 11:52 GMT+08:00 Albert Archer :
> So, there is no solution at all?
>
> On Sun, May 22, 2016 at 7:01 PM, Albert Archer
> wrote:
>
So, there is no solution at all?
On Sun, May 22, 2016 at 7:01 PM, Albert Archer
wrote:
> Hello All.
>
> Determining the number of PGs and PGPs is a very hard job (of course for
> newbies like me).
> The problem is, when we set the number of PGs and PGPs while creating a
> pool, it seems there
2016-05-23 10:31 GMT+08:00 Sharuzzaman Ahmat Raslan :
> Does your service have only one instance?
> Are your services running on VMs?
About twenty krbd instances; they are mapped and mounted on three OSD machines.
> On May 23, 2016 10:23 AM, "lin zhou" wrote:
>>
>> Christian Balzer, thanks for your reply.
>
Does your service have only one instance?
Are your services running on VMs?
On May 23, 2016 10:23 AM, "lin zhou" wrote:
> Christian Balzer, thanks for your reply.
>
> You say a kernel upgrade will interrupt service, but AFAIK, unmounting the
> rbd and fstrimming it on another machine will interrupt service too.
>
>
Christian Balzer, thanks for your reply.
You say a kernel upgrade will interrupt service, but AFAIK, unmounting the rbd
and fstrimming it on another machine will interrupt service too.
In my environment, all krbd volumes serve online business; service
interruption is not allowed.
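For reference, once the running kernel supports discard on krbd, the trim can
be done in place with no unmap at all; a minimal sketch, assuming the krbd
filesystem is mounted at /mnt/rbd0 (the mount point is only an example):
# fstrim -v /mnt/rbd0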
2016-05-21 11:58 GMT+08:00 Christian Balzer :
>
Hi,
# ceph -s
    cluster 82c855ce-b450-4fba-bcdf-df2e0c958a41
     health HEALTH_ERR
            5 pgs inconsistent
            7 scrub errors
            too many PGs per OSD (318 > max 300)
The cluster is in HEALTH_ERR as shown above; how can I fix these up? Thanks.
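The usual first steps, as a sketch (confirm the root cause of the
inconsistencies, e.g. a failing disk, before repairing; <pg_id> is a
placeholder):
# ceph health detail    (lists exactly which PGs are inconsistent)
# ceph pg repair <pg_id>
AFAIK the "too many PGs per OSD" threshold in this release line is
mon_pg_warn_max_per_osd; it can be raised to silence the warning, though
adding OSDs is the proper fix.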
I'm contemplating the same thing as well. Or rather, I'm actually doing some
testing. I have a Netlist EV3 and have seen ~6GB/s read and write for any
block size larger than 16k or so, IIRC.
Sebastien Han has a blog page with journal benchmarks; I've added the
specifics there.
This week, I e
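IIRC the benchmark on that page is essentially a dd with the direct and dsync
flags, something along these lines (destructive; the device name is only an
example):
# dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync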
I am using Intel P3700DC 400G cards in a similar configuration (two per host) -
perhaps you could look at cards of that capacity to meet your needs.
I would suggest that having such small journals means you will be constantly
blocking on journal flushes, which will impact write performance and l
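For sizing, the usual FileStore rule of thumb is journal size = 2 x (expected
throughput x filestore max sync interval); a sketch in ceph.conf, where the
10 GB figure is only an example:
[osd]
osd journal size = 10240    # in MB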
Hello All.
Determining the number of PGs and PGPs is a very hard job (of course for
newbies like me).
The problem is, when we set the number of PGs and PGPs while creating a pool,
it seems there is no way to decrease the PGs for that pool.
I configured 9 OSD hosts (virtual machines in VMware ESX
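For context: on releases of that era pg_num could only ever be increased, e.g.
(<pool-name> is a placeholder):
# ceph osd pool set <pool-name> pg_num 256
# ceph osd pool set <pool-name> pgp_num 256
Attempts to set a smaller value were rejected; decreasing pg_num only became
possible much later, with PG merging in Nautilus.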