> Xie, does that sound right?
yeah, looks right to me.
Original Mail
From: Sage Weil
To: Sergey Dolgov
Cc: Gregory Farnum; ceph-users@lists.ceph.com; ceph-de...@vger.kernel.org; Xie Xingguo 10072465
Date: 2019-01-03 11:05
Subject: Re: [ceph-users] size of inc_osdmap vs osdmap
I think that code was broken by
ea723fbb88c69bd00fefd32a3ee94bf5ce53569c and should be fixed like so:
diff --git a/src/mon/OSDMonitor.cc b/src/mon/OSDMonitor.cc
index 8376a40668..12f468636f 100644
--- a/src/mon/OSDMonitor.cc
+++ b/src/mon/OSDMonitor.cc
@@ -1006,7 +1006,8 @@ void OSDMonitor::prime
>
> Well those commits made some changes, but I'm not sure what about them
> you're saying is wrong?
>
I mean that all pgs have "up == acting && next_up == next_acting", but at
https://github.com/ceph/ceph/blob/luminous/src/mon/OSDMonitor.cc#L1009 the
condition "next_up != next_acting" is false, so we clear pg_temp for every pg.
Thanks Greg
I dumped the inc_osdmap to a file:
ceph-dencoder type OSDMap::Incremental import ./inc\\uosdmap.1378266__0_B7F36FFA__none decode dump_json > inc_osdmap.txt
There are 52330 pgs (the cluster has 52332 pgs) in the structure 'new_pg_temp', and for all of them the osd list is empty. For example, a short excerpt:
{
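(For reference, a minimal sketch of what such entries mean, under the assumption that an empty osd list in new_pg_temp removes the pg_temp entry for that pg when the incremental is applied; pg_temp_map and apply_new_pg_temp are simplified stand-ins, not the real OSDMap code.)

// Sketch only (not OSDMap::apply_incremental): an empty "osds" list erases the
// pg_temp entry for that pg, a non-empty list sets it.  So 52330 entries with
// empty osd lists are 52330 "clear pg_temp" operations, one per pg.
#include <map>
#include <string>
#include <vector>

using pg_temp_map = std::map<std::string, std::vector<int>>;  // pgid -> temporary acting set

void apply_new_pg_temp(const pg_temp_map& new_pg_temp, pg_temp_map& pg_temp) {
  for (const auto& [pgid, osds] : new_pg_temp) {
    if (osds.empty())
      pg_temp.erase(pgid);    // "clear pg_temp" -- a no-op if the pg had no pg_temp entry
    else
      pg_temp[pgid] = osds;   // set/replace the temporary mapping
  }
}

Each encoded entry is a pgid plus an empty list, so a few tens of bytes per pg; ~52k of them plausibly accounts for most of the ~1.2MB incremental, while the full osdmap stays around 0.4MB.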
On Thu, Dec 27, 2018 at 1:20 PM Sergey Dolgov wrote:
We investigated the issue: we set debug_mon to 20, and during a small change of
the osdmap we got many messages like this for every pg of every pool (for the whole cluster):
> 2018-12-25 19:28:42.426776 7f075af7d700 20 mon.1@0(leader).osd e1373789
> prime_pg_tempnext_up === next_acting now, clear pg_temp
> 2018-12-25 19:28:4
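(Aside: the fused text "prime_pg_tempnext_up" in those log lines is presumably just __func__ streamed directly before a message string that has no leading space; the toy snippet below shows the same concatenation and is not the real Ceph logging code.)

// Illustration only: __func__ expands to the enclosing function's name, so
// streaming it right before a message with no leading space produces fused
// output such as "prime_pg_temp_demonext_up == ...".
#include <iostream>

void prime_pg_temp_demo() {   // hypothetical function, named after the one in the logs
  std::cout << __func__ << "next_up == next_acting now, clear pg_temp" << std::endl;
  // prints: prime_pg_temp_demonext_up == next_acting now, clear pg_temp
}

int main() { prime_pg_temp_demo(); }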
Those are the sizes in the file system. I use filestore as the backend.
On Wed, Dec 12, 2018, 22:53 Gregory Farnum wrote:
Hmm that does seem odd. How are you looking at those sizes?
On Wed, Dec 12, 2018 at 4:38 AM Sergey Dolgov wrote:
Greg, for example, for our cluster of ~1000 osds:
size osdmap.1357881__0_F7FE779D__none = 363KB (crush_version 9860, modified 2018-12-12 04:00:17.661731)
size osdmap.1357882__0_F7FE772D__none = 363KB
size osdmap.1357883__0_F7FE74FD__none = 363KB (crush_version 9861, modified 2018-12-12 04:00:27.385702)
This is probably because when CRUSH changes,
On Wed, Dec 5, 2018 at 3:32 PM Sergey Dolgov wrote:
Hi guys
I ran into strange behavior when changing the crushmap. When I change the crush
weight of an osd, I sometimes get an incremental osdmap (1.2MB) that is
significantly bigger than the full osdmap (0.4MB).
I use luminous 12.2.8. The cluster was installed long ago; I suppose it was
initially firefly.
How can I view